
Neural Networks 23 (2010) 510–516
Contents lists available at ScienceDirect: Neural Networks
journal homepage: www.elsevier.com/locate/neunet

2010 Special Issue

The performance verification of an evolutionary canonical particle swarm optimizer

Hong Zhang*, Masumi Ishikawa
Department of Brain Science and Engineering, Graduate School of Life Science & Systems Engineering, Kyushu Institute of Technology, 2-4, Hibikino, Wakamatsu, 808-0196, Kitakyushu, Japan

This paper was originally presented at ICANN2008 (Zhang & Ishikawa, 2008); this is a substantially extended version.
* Corresponding author. Tel./fax: +81 93 695 6112. E-mail addresses: [email protected] (H. Zhang), [email protected] (M. Ishikawa).
doi:10.1016/j.neunet.2009.12.002
Article history: Received 10 February 2009; Received in revised form 1 December 2009; Accepted 7 December 2009.

Keywords: Evolutionary particle swarm optimization; Canonical particle swarm optimizer; Temporally cumulative fitness function; Real-coded genetic algorithm; Exploitation and exploration; Model selection

Abstract

We previously proposed to introduce evolutionary computation into particle swarm optimization (PSO), named evolutionary PSO (EPSO). It is well known that a constricted version of PSO, i.e., a canonical particle swarm optimizer (CPSO), has a good convergence property compared with PSO. To further improve the search performance of a CPSO, we propose in this paper a new method called an evolutionary canonical particle swarm optimizer (ECPSO) using the meta-optimization proposed in EPSO. The ECPSO is expected to be an optimized CPSO in that optimized values of parameters are used in the CPSO. We also introduce a temporally cumulative fitness function into the ECPSO to reduce stochastic fluctuation in evaluating the fitness function. Our experimental results indicate that (1) the optimized values of parameters are quite different from those in the conventional CPSO; (2) the search performance by the ECPSO, i.e., the optimized CPSO, is superior to that by CPSO, OPSO, EPSO, and RGA/E except for the Rastrigin problem.

© 2009 Elsevier Ltd. All rights reserved.

1. Introduction

Particle swarm optimization (PSO) is an adaptive, stochastic and population-based optimization method proposed by Kennedy and Eberhart, inspired by the social behavior of flocks of birds and schools of fish (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995). Its major features include particle dynamics, information exchange, intrinsic memory, and directional search. Due to its ease of understanding and implementation, and its ability in search, PSO has been widely applied to large-scale, highly nonlinear, and complex optimization problems (Eberhart, Shi, & Kennedy, 2001; Poli, Kennedy, & Blackwell, 2007; Reyes-Sierra & Coello, 2006).

Like other optimization methods, an essential issue in PSO is efficiency in solving diverse optimization problems. To this end, we believe that systematic selection of values of parameters in PSO is fundamental and most important. Generally speaking, the selection process may be regarded as meta-optimization in the sense that the selection process is located at the outer loop in addition to PSO at the inner loop. We previously proposed evolutionary particle swarm optimization (EPSO) by combining PSO and a real-coded genetic algorithm (Zhang & Ishikawa, 2007, 2008).

During the last decade, many variants of the original PSO were proposed, such as a particle swarm optimizer with inertia (Shi & Eberhart, 1998), the canonical particle swarm optimizer (CPSO) (Clerc & Kennedy, 2002), the fully informed particle swarm (FIPS) (Kennedy & Mendes, 2002), the gregarious particle swarm optimizer (G-PSO) (Pasupuleti & Battiti, 2000), the adaptive hierarchical particle swarm optimizer (H-PSO) (Janson & Middendorf, 2005), and dissipative particle swarm optimization (DPSO) (Xie, Zhang, & Yang, 2002). Although the mechanisms of these methods are simple, with only a few parameters to adjust, they have better search performance than other conventional approaches such as machine learning, neural network learning, genetic algorithms, and simulated annealing (Abraham, Guo, & Liu, 2006; Kennedy, 2006).

The CPSO is a constricted version of the original PSO, which decreases the amplitude of oscillation of particles owing to a constriction coefficient, resulting in quick convergence in search. Since the values of parameters in the CPSO are based on an analysis of the dynamics of the original PSO (Clerc & Kennedy, 2002), the CPSO has become a highly influential method in PSO research.

Despite recent efforts in theoretical analysis (Campana, Fasano, Peri, & Pinto, 2006; Clerc, 2006; Trelea, 2003), the values of parameters in the CPSO still remain empirical. This is because the behavior of a swarm of particles is highly stochastic and dynamic. A major difficulty of the CPSO is that knowledge about the search space, such as whether it is unimodal or multimodal, which is hard to acquire, is required to guide a swarm of particles.

To remedy this difficulty, we propose in this paper an evolutionary canonical particle swarm optimizer (ECPSO) by introducing evolutionary computation into the CPSO (Zhang & Ishikawa, 2008). To demonstrate the effectiveness of the ECPSO, computer experiments on a suite of multidimensional benchmark problems are carried out. Based on the computer experiments, we analyze the characteristics of the ECPSO, and compare its search performance with the CPSO, OPSO (optimized particle swarm optimization) (Meissner, Schmuker, & Schneider, 2006), EPSO, and RGA/E (real-coded genetic algorithm with elitism strategy) (Zhang & Ishikawa, 2005).

The rest of the paper is organized as follows. Section 2 briefly describes related work. Section 3 gives the mechanisms of PSO and the CPSO. Section 4 introduces the proposed method, the ECPSO, and its criterion for selecting values of parameters. Section 5 discusses and analyzes the results of computer experiments on a suite of multidimensional benchmark problems. Section 6 gives the conclusions.

2. Related works

Researchers have paid much attention to the selection of parameter values in PSO for improving the search performance, and have proposed a number of algorithms to this end (Beielstein, Parsopoulos, & Vrahatis, 2002; Carlisle & Dozier, 2001; Eberhart & Shi, 2000). Fig. 1 illustrates the relationship among evolving PSO methods with different mechanisms.

Fig. 1. A family of evolving PSO methods.

As shown in Fig. 1, the composite PSO (Parsopoulos & Vrahatis, 2002), OPSO, and EPSO are evolutionary versions of PSO. The composite PSO uses a DE (differential evolution) algorithm, which uses a difference vector of two randomly selected boundary vectors for providing heuristics to PSO (Storn & Price, 1997). Its shortcoming is inevitable fluctuation of the results caused by the random selection. For systematically estimating PSO models with superior search performance, Meissner et al. proposed optimized particle swarm optimization, OPSO, using PSO itself as a tool for meta-optimization (Meissner et al., 2006). Zhang et al. proposed evolutionary particle swarm optimization (EPSO), which uses RGA/E as a tool for meta-optimization (Zhang & Ishikawa, 2007, 2008).

We propose to extend the CPSO, a constricted version of PSO, to its evolutionary version, the ECPSO. This is analogous to the extension of PSO to EPSO. Evolutionary computation optimizes the values of parameters in the CPSO, which is expected to improve its search performance. Another extension is to use a temporally cumulative fitness function instead of an instantaneous fitness function to reduce stochastic fluctuation in evaluating fitness. Due to the non-negative constraint on estimated parameters, the values of some parameters become zero. This can be regarded as selection of a model structure.

3. PSO and the CPSO

Let the search space be N-dimensional, S \in \Re^N, the number of particles of a swarm be P, the position of the ith particle be \vec{x}^i = (x_1^i, x_2^i, \ldots, x_N^i)^T, and the velocity of the ith particle be \vec{v}^i = (v_1^i, v_2^i, \ldots, v_N^i)^T. PSO (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995) is formulated as

  \vec{x}_{k+1}^i = \vec{x}_k^i + \vec{v}_{k+1}^i
  \vec{v}_{k+1}^i = w \vec{v}_k^i + w_1 \vec{r}_1 \otimes (\vec{p}_k^i - \vec{x}_k^i) + w_2 \vec{r}_2 \otimes (\vec{q}_k - \vec{x}_k^i)    (1)

where w is an inertial coefficient, w_1 is a coefficient for individual confidence, and w_2 is a coefficient for swarm confidence. \vec{r}_1 and \vec{r}_2 are two N-dimensional random vectors, each element of which is uniformly distributed on [0, 1], and \otimes is an element-by-element vector multiplication operator. Let g(\vec{x}_k^i) be the fitness value of the ith particle at time k. \vec{p}_k^i is the local best position of the ith particle up to now, and \vec{q}_k (= \arg\max_{i=1,2,\ldots} \{g(\vec{p}_k^i)\}) is the global best position among the whole swarm up to now.

In the original PSO, the values of parameters are w = 1.0 and w_1 = w_2 = 2.0 (Kennedy & Eberhart, 1995).

The CPSO (Clerc & Kennedy, 2002) is defined by

  \vec{x}_{k+1}^i = \vec{x}_k^i + \vec{v}_{k+1}^i
  \vec{v}_{k+1}^i = \chi \left[ \vec{v}_k^i + c_1 \vec{r}_1 \otimes (\vec{p}_k^i - \vec{x}_k^i) + c_2 \vec{r}_2 \otimes (\vec{q}_k - \vec{x}_k^i) \right]    (2)

where \chi is a constriction coefficient, c_1 is a coefficient for individual confidence, and c_2 is a coefficient for swarm confidence. The relationship between the coefficients in Eqs. (1) and (2) is

  \chi = w,   \chi c_1 = w_1,   \chi c_2 = w_2.    (3)

Eq. (3) clearly indicates that the constriction coefficient, \chi, has control over the velocity of particles, which is considered to be crucial for the convergence property of particles (Clerc & Kennedy, 2002; Eberhart & Shi, 2000). The value of the constriction coefficient is typically set to

  \chi = \frac{2\kappa}{\left| 2 - \phi - \sqrt{\phi^2 - 4\phi} \right|},   for \phi > 4    (4)

where \phi = c_1 + c_2, and \kappa = 1.

Different values of \kappa, as well as a theoretical analysis of the derivation of Eq. (4), can be found in Clerc and Kennedy (2002) and Clerc (2006). According to Clerc's constriction method, \phi is set to 4.1, c_1 and c_2 are each set to 2.05, and the constant multiplier, \chi, is approximately 0.7298.

The above values of parameters in the CPSO are frequently adopted in solving various optimization problems (Lane, Engelbrecht, & Gain, 2008). By Eq. (3), the PSO with the corresponding values w = 0.7298 and w_1 = w_2 = 1.4960 is equivalent to the CPSO with the above values of parameters and has the same behavior. However, these values of parameters are not necessarily optimal. This is exactly the reason why we propose the ECPSO, which introduces evolutionary computation into the CPSO to further improve the search performance.
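For illustration only (this is our own sketch, not the authors' code), the CPSO update of Eq. (2) and the constriction coefficient of Eq. (4) can be written in a few lines of Python; all names are assumptions made here.

    import numpy as np

    def constriction(phi=4.1, kappa=1.0):
        # Constriction coefficient of Eq. (4); requires phi = c1 + c2 > 4.
        return 2.0 * kappa / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

    def cpso_step(x, v, p_best, q_best, chi, c1, c2, rng):
        # One CPSO iteration, Eq. (2). x, v, p_best: (P, N) arrays; q_best: (N,) array.
        r1 = rng.random(x.shape)            # element-wise random factors on [0, 1]
        r2 = rng.random(x.shape)
        v_new = chi * (v + c1 * r1 * (p_best - x) + c2 * r2 * (q_best - x))
        return x + v_new, v_new

    # Clerc's standard setting: phi = 4.1, c1 = c2 = 2.05 gives chi of about 0.7298.
    print(constriction(4.1))

By Eq. (3), calling cpso_step with chi = w, c1 = w_1/chi, and c2 = w_2/chi reproduces the original PSO update of Eq. (1).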
4. Proposed method, the ECPSO

This section presents a basic architecture of the ECPSO with outer and inner loops, a fitness function, and a method for evaluating the fitness function under stochastic fluctuation.

4.1. Optimization of the CPSO parameters

Fig. 2 illustrates the basic architecture of the ECPSO, which optimizes the values of parameters in the CPSO by evolutionary computation. It is composed of two loops: an outer loop and an inner loop. The inner loop runs the CPSO, which solves a given real-valued optimization problem with a particular set of parameter values in the CPSO. The outer loop runs RGA/E, which simulates the survival of the fittest among individuals (an individual being a set of parameter values) over generations to find the best parameter values, i.e., \chi, c_1, and c_2, in the CPSO, by genetic operations.

Fig. 2. Basic concept of the ECPSO.

The genetic operations adopted in RGA/E are roulette wheel selection, BLX-\alpha crossover (Eshelman & Schaffer, 1993), random mutation, non-redundant search, ranking operation, and mixing operation, for efficiently obtaining the optimized parameter values in the CPSO for a given optimization problem. The ith individual, \vec{e}_i, in a population is represented by an N-dimensional vector of real numbers, \vec{e}_i = (e_1^i, e_2^i, \ldots, e_N^i).

Roulette wheel selection: The probability of selecting an individual, \vec{e}_i, is expressed by

  p[\vec{e}_i] = \frac{g(\vec{e}_i)}{\sum_{m=1}^{M} g(\vec{e}_m)}

where g(\cdot) is the fitness value of the individual, \vec{e}_i, and M is the number of individuals in the population.

BLX-\alpha crossover: This reinitializes the values of the offspring \vec{e}_{ij} with values from an extended range given by the parents (\vec{e}_i, \vec{e}_j), where e_k^{ij} = U(e_{k,\min}^{ij} - \alpha I, e_{k,\max}^{ij} + \alpha I)^1 with e_{k,\min}^{ij} = \min(e_k^i, e_k^j), e_{k,\max}^{ij} = \max(e_k^i, e_k^j), and I = e_{k,\max}^{ij} - e_{k,\min}^{ij}. The parameter, \alpha, is a predetermined constant.

Random mutation: For a randomly chosen chromosome k of an individual \vec{e}_i, the allele e_k^i is perturbed by a randomly chosen value, \Delta e_k \sim U[-\Delta e_b, \Delta e_b]. Note that \Delta e_b is the given boundary value for a mutant variant in operation. Therefore, the allele, e_k^{i\prime}, of the offspring, \vec{e}_i^{\prime}, is obtained as e_k^{i\prime} = e_k^i + \Delta e_k.

Ranking operation: For executing an elitism strategy, the individuals in the current population, P_t, are sorted according to their fitness. Let the sorted population be P_t''.

Non-redundant search: Delete redundant individuals to increase the probability of finding the optimal values of parameters. Let the new population with additional random individuals be P_t'''.

Mixing operation: Let the population at the next generation, P_{t+1}, be formed by merging superior individuals from P_t'' and non-inferior individuals from P_t'''. Here, a superior individual means an individual with superior fitness, and an inferior individual means an individual with inferior fitness.

1 U[a, b] denotes the uniform probability distribution on [a, b].
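As a purely illustrative sketch (our own code, with assumed names and population layout), the three value-changing operators above (roulette wheel selection, BLX-\alpha crossover, and random mutation) can be realized for real-coded individuals as follows; with the settings used later in Table 1 they would be applied with \alpha = 2.0 (BLX-2.0), \Delta e_b = 1.0, and probabilities pc = pm = 0.5.

    import numpy as np

    def roulette_select(pop, fitness, rng):
        # Select one individual with probability proportional to its fitness.
        prob = fitness / fitness.sum()
        return pop[rng.choice(len(pop), p=prob)]

    def blx_alpha(parent_a, parent_b, alpha, rng):
        # BLX-alpha crossover: each gene is redrawn uniformly from the
        # parent interval extended by alpha times its width.
        lo = np.minimum(parent_a, parent_b)
        hi = np.maximum(parent_a, parent_b)
        span = hi - lo
        return rng.uniform(lo - alpha * span, hi + alpha * span)

    def random_mutation(indiv, delta_b, rng):
        # Add a value drawn from U[-delta_b, delta_b] to one randomly chosen allele.
        child = indiv.copy()
        k = rng.integers(len(indiv))
        child[k] += rng.uniform(-delta_b, delta_b)
        return child

Ranking, non-redundant search, and mixing then act on whole populations: sort by fitness, drop duplicates (refilling with random individuals), and merge the best of both intermediate populations into P_{t+1}.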
4.2. Evaluation of fitness

A fitness value which evaluates the performance of the CPSO is subject to stochastic fluctuation. To reduce this stochastic fluctuation, we proposed the following temporally cumulative fitness function (Zhang & Ishikawa, 2008):

  F(\chi, c_1, c_2) = \sum_{k=1}^{K} g(\vec{q}_k) \big|_{\chi, c_1, c_2}    (5)

where K is the maximum number of iterations in the CPSO. Its statistical variance is known to be inversely proportional to the time span, K.
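As a hedged sketch of how Eq. (5) ties the two loops together (our own code; function and parameter names are assumptions), the inner loop below runs one CPSO with a candidate triple (\chi, c_1, c_2) and returns the temporally cumulative fitness; the outer RGA/E loop would simply maximize this quantity over non-negative parameter values.

    import numpy as np

    def cumulative_fitness(chi, c1, c2, g, n_dim,
                           n_particles=10, n_iter=400, bound=5.12, seed=0):
        # Inner loop of the ECPSO: run the CPSO with (chi, c1, c2) and return Eq. (5).
        # g is the fitness function to be maximized, e.g. g_omega of Eq. (6).
        rng = np.random.default_rng(seed)
        x = rng.uniform(-bound, bound, size=(n_particles, n_dim))   # initial positions
        v = np.zeros_like(x)                                        # initial velocity is zero
        p_best = x.copy()
        p_fit = np.array([g(xi) for xi in x])
        total = 0.0
        for _ in range(n_iter):
            q_best = p_best[p_fit.argmax()]                         # global best position
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = chi * (v + c1 * r1 * (p_best - x) + c2 * r2 * (q_best - x))
            x = x + v
            fit = np.array([g(xi) for xi in x])
            better = fit > p_fit
            p_best[better], p_fit[better] = x[better], fit[better]
            total += p_fit.max()        # accumulate g(q_k) over iterations, Eq. (5)
        return total

Each candidate (\chi, c_1, c_2) is thus scored by a whole CPSO run rather than by a single snapshot, which is what damps the stochastic fluctuation discussed above.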
5. Computer experiments

The aims of the experiments are to evaluate the effectiveness of the proposed method, to analyze the characteristics of the optimized CPSO models, and to compare the search performance with other methods, i.e., the original CPSO, OPSO, RGA/E, and EPSO.

5.1. Benchmark problems

To facilitate comparison and analysis of the search performance of the ECPSO, we use the following suite of multidimensional benchmark problems (http://www.ntu.edu.sg/home/epnsugan/index_files/CEC-05/Tech-Report-May-30-05.pdf) for minimization with various complexities.

Sphere function:
  f_{Sp}(\vec{x}) = \sum_{d=1}^{N} x_d^2.

Griewank function:
  f_{Gr}(\vec{x}) = \frac{1}{4000} \sum_{d=1}^{N} x_d^2 - \prod_{d=1}^{N} \cos\left(\frac{x_d}{\sqrt{d}}\right) + 1.

Rastrigin function:
  f_{Ra}(\vec{x}) = \sum_{d=1}^{N} \left( x_d^2 - 10\cos(2\pi x_d) + 10 \right).

Rosenbrock function:
  f_{Ro}(\vec{x}) = \sum_{d=1}^{N-1} \left[ 100\left(x_{d+1} - x_d^2\right)^2 + \left(x_d - 1\right)^2 \right].

Schwefel function:
  f_{Sw}(\vec{x}) = \sum_{d=1}^{N} \left( \sum_{j=1}^{d} x_j \right)^2.

Hybrid composition function:
  f_{Hy}(\vec{x}) = f_{Ra}(\vec{x}) + 2 f_{Sw}(\vec{x}) + \frac{1}{12} f_{Gr}(\vec{x}) + \frac{1}{20} f_{Sp}(\vec{x}).

The following fitness function over the search space, S \in (-5.12, 5.12)^N, is used:

  g_\omega(\vec{x}) = \frac{1}{f_\omega(\vec{x}) + 1},    (6)

where the subscript, \omega, stands for one of the following: Sp (Sphere), Gr (Griewank), Ra (Rastrigin), Ro (Rosenbrock), Sw (Schwefel), and Hy (Hybrid composition). f_\omega(\vec{x}) is zero at the global best position; hence the largest fitness value of Eq. (6) is 1 for all problems.

Fig. 3 visualizes the fitness functions for the given benchmark problems in two dimensions. The Sphere function is unimodal with axis symmetry, the Rosenbrock function is unimodal with asymmetry, the Schwefel function is unimodal with line symmetry, the Griewank and Rastrigin functions are multimodal with different distribution densities and axis symmetry, and the Hybrid composition function is multimodal with different distribution densities and line symmetry.
Fig. 3. The fitness functions on the two-dimensional plane. (a) Sphere problem, (b) Griewank problem, (c) Rastrigin problem, (d) Rosenbrock problem, (e) Schwefel problem, and (f) Hybrid composition problem.
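A minimal sketch of the benchmark suite and the fitness transform of Eq. (6), written here for illustration (vectorized NumPy; the function names are ours):

    import numpy as np

    def f_sphere(x):
        return np.sum(x**2)

    def f_griewank(x):
        d = np.arange(1, x.size + 1)
        return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(d))) + 1.0

    def f_rastrigin(x):
        return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

    def f_rosenbrock(x):
        return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

    def f_schwefel(x):
        # Schwefel problem 1.2: sum of squared partial sums.
        return np.sum(np.cumsum(x)**2)

    def f_hybrid(x):
        return f_rastrigin(x) + 2.0 * f_schwefel(x) + f_griewank(x) / 12.0 + f_sphere(x) / 20.0

    def fitness(f, x):
        # Eq. (6): maps a minimization objective to a fitness value in (0, 1].
        return 1.0 / (f(x) + 1.0)

    # At the global optimum x = 0 the fitness reaches its maximum of 1, e.g.:
    print(fitness(f_hybrid, np.zeros(5)))   # 1.0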

5.2. Experimental setting

The initial position of each particle is uniformly distributed in the search space, S, and the initial velocity is set to zero. Table 1 gives the major parameters in the ECPSO.

Table 1
Major parameters in the ECPSO.

  Parameter                                        Value
  The number of individuals, M                     10
  The number of generations, G                     20
  The number of superior individuals, sn           2
  Probability of BLX-2.0 crossover, pc             0.5
  Probability of random mutation, pm               0.5
  The boundary value, \Delta e_b                   1.0
  The number of iterations, K                      400
  The number of particles, P                       10

Note that neither non-redundant search nor roulette wheel selection in the genetic operations has a parameter. The smaller numbers of individuals, particles, and iterations are chosen in order to strike a balance between accuracy and computation speed. By repeatedly taking the average of the results, the accuracy of optimization is guaranteed.

It is well known that crossover and mutation contribute to retaining the diversity of individuals during evolutionary computation. The probabilities, pc and pm, of crossover and mutation are usually set to a small value such as 0.05, because a larger probability of crossover and mutation is considered to destroy most of the sound chromosomes and lead to disastrous solutions.

However, there is a big difference between a binary-coded GA and a real-coded GA in genetic operations for optimization with real-valued variables. In a binary-coded GA, the crossover operator makes cuts between genes, which changes the values of the corresponding variables. In a real-coded GA, on the other hand, crossover just interchanges variables between individuals without changing their values. Similarly, the mutation operator only creates a new value for the corresponding variable in a real-coded GA. Because of this, generally speaking, the probability of genetic operation in a real-coded GA can be set to a larger value than that in a binary-coded GA for achieving good search performance.

To verify this, preliminary experiments were carried out. Fig. 4 illustrates the resulting mean and standard deviation of fitness values for RGA/E with different probabilities, pc and pm, of crossover and mutation. Fig. 4 indicates that the fitness is not much affected by the probabilities of crossover and mutation, with the exception of the Rastrigin and Schwefel problems. In the Rastrigin and Schwefel problems, the fitness value is small and its variance is large for small probabilities of crossover and mutation. Based on the above results, the probability of crossover and mutation in RGA/E is set to 0.5 in our experiments.

5.3. Experimental results

Computer experiments for optimizing CPSO models by evolutionary computation are carried out for the suite of 5-, 10-, and 20-dimensional benchmark problems. It is to be noted that the appropriate values of parameters in the CPSO are estimated with a non-negative constraint.

Fig. 5 illustrates the cumulative fitness of the six best individuals for the 5-dimensional Sphere problem. Fig. 5 indicates that the fitness value changes abruptly except for the two best individuals.

Fig. 5. Fitness values of the six best individuals.

The abrupt change of fitness values is considered to be caused mainly by the high probability of crossover and mutation, and by the non-redundant search. On the other hand, the fitness values of the two best individuals increase monotonically owing to the elitism strategy.

Based on the resulting values of parameters, the optimized PSO models can be divided into two types (a-type and b-type).
Fig. 4. The search performance of RGA/E with different settings of the probability of crossover and mutation. (a) Sphere problem, (b) Griewank problem, (c) Rastrigin problem, (d) Rosenbrock problem, (e) Schwefel problem, (f) Hybrid composition problem, (g) Rastrigin problem (setting pm with pc = 0.1), (h) Rastrigin problem (setting pm with pc = 0.9).

Table 2 shows the resulting values of parameters in the CPSO models and their frequencies in 20 trials.² It indicates that the frequency of the b-type is not smaller than that of the a-type. Table 2 provides the following findings.

• There are two types of optimized CPSO model structure. The optimized values of parameters are not unique and are subject to variation.
• The resulting value of parameter c_2 never becomes zero for any problem. This suggests that the swarm confidence factor, c_2, plays an essential role in the CPSO.
• The resulting value of the constriction coefficient, \chi, is always less than 1. This suggests that the "constriction" term is effective in improving the convergence of the CPSO.

In addition, the condition \phi = c_1 + c_2 > 4 required by Eq. (4) is, in fact, not satisfied in many cases. This suggests that the parameter selection under a conventional model is very significant.

² Computing environment: Intel® Xeon™ CPU 3.40 GHz; memory 2.00 GB RAM; computing tool: Mathematica 5.2; computing time: about 9.8 min (in the 5-dimensional case), 18.4 min (in the 10-dimensional case), and 55.1 min (in the 20-dimensional case).

5.4. Performance comparison

Table 3 gives the resulting average fitness values and standard deviations for the CPSO, ECPSO, OPSO, RGA/E, and EPSO for the given benchmark problems. The search performance of the ECPSO is superior to that of the original CPSO except for the Sphere problem.

Table 2
The resulting values of parameters in CPSO models and their frequency for the benchmark problems (20 trials in each case).

Problem      Dim.  Optimized CPSO  \chi             c_1              c_2              Frequency (%)
Sphere        5    a-type          0.6887 ± 0.0000  0                1.8010 ± 0.0000    5
              5    b-type          0.6528 ± 0.1760  4.3175 ± 2.7298  2.1319 ± 1.1178   95
             10    b-type          0.7668 ± 0.0590  2.0517 ± 0.6844  1.9260 ± 0.4301  100
             20    a-type          0.8456 ± 0.0001  0                0.7511 ± 0.0001   30
             20    b-type          0.8456 ± 0.0000  2.1750 ± 0.9541  1.2416 ± 0.2536   70
Griewank      5    b-type          0.7354 ± 0.1723  1.6764 ± 1.3422  1.1250 ± 0.7046  100
             10    b-type          0.8925 ± 0.0678  1.6085 ± 0.5662  0.4986 ± 0.2150  100
             20    b-type          0.9591 ± 0.7140  2.6972 ± 0.8136  0.5717 ± 0.3967  100
Rastrigin     5    a-type          0.7165 ± 0.0000  0                0.8673 ± 0.0000    5
              5    b-type          0.8158 ± 0.2396  1.6761 ± 1.1867  1.5339 ± 0.7950   95
             10    a-type          0.9249 ± 0.0300  0                0.3629 ± 0.1912   25
             10    b-type          0.8738 ± 0.0479  0.5563 ± 0.4863  0.4424 ± 0.1841   75
             20    a-type          0.9151 ± 0.0000  0                1.2400 ± 0.0000    5
             20    b-type          0.5646 ± 0.1527  6.800 ± 2.5389   1.5910 ± 0.2155   95
Rosenbrock    5    a-type          0.5902 ± 0.0000  0                3.7107 ± 0.5967   10
              5    b-type          0.9397 ± 0.1333  0.9694 ± 0.4533  1.7525 ± 0.4529   90
             10    b-type          0.8689 ± 0.0457  0.9927 ± 0.6309  1.1325 ± 0.4529  100
             20    b-type          0.9247 ± 0.0000  1.9832 ± 0.3718  0.7552 ± 0.2529  100
Schwefel      5    b-type          0.8139 ± 0.0853  1.8770 ± 0.7563  0.6377 ± 0.1597  100
             10    b-type          0.8444 ± 0.0398  1.5419 ± 0.7225  0.9923 ± 0.4073  100
             20    b-type          0.9320 ± 0.0241  0.4519 ± 0.4377  0.4506 ± 0.2318  100
Hybrid        5    b-type          0.9004 ± 0.0763  2.7273 ± 0.5100  1.7055 ± 0.8899  100
             10    b-type          0.9777 ± 0.0000  0.0787 ± 0.0086  0.1407 ± 0.0439  100
             20    b-type          0.6019 ± 0.0550  6.2342 ± 1.0664  0.7327 ± 0.1214  100

Table 3
The average fitness values and standard deviations for the original CPSO, ECPSO, OPSO, RGA/E, and EPSO for each benchmark problem (mean and standard deviation over 20 trials; the number of iterations, K = 400). The best average among the five methods is shown in bold in the original article.

Problem      Dim.  Original CPSO    ECPSO            OPSO             RGA/E            EPSO
Sphere        5    1.0000 ± 0.0000  1.0000 ± 0.0000  1.0000 ± 0.0000  0.9990 ± 0.0005  1.0000 ± 0.0000
             10    0.9997 ± 0.0002  0.9998 ± 0.0002  0.9980 ± 0.0077  0.9957 ± 0.0028  0.9998 ± 0.0003
             20    0.7357 ± 0.2940  0.9797 ± 0.0233  0.6939 ± 0.3131  0.9207 ± 0.0290  0.9755 ± 0.0406
Griewank      5    0.8688 ± 0.0916  0.9901 ± 0.0050  0.9448 ± 0.0439  0.9452 ± 0.0784  0.9876 ± 0.0104
             10    0.9814 ± 0.0143  0.9924 ± 0.0068  0.8236 ± 0.1835  0.9136 ± 0.1415  0.9706 ± 0.0243
             20    0.9688 ± 0.0275  0.9954 ± 0.0046  0.8073 ± 0.1742  0.8816 ± 0.1471  0.9731 ± 0.1856
Rastrigin     5    0.1828 ± 0.1154  0.2710 ± 0.1300  0.2652 ± 0.1185  0.9064 ± 0.2256  1.0000 ± 0.0000
             10    0.0310 ± 0.0090  0.0331 ± 0.0110  0.0321 ± 0.0255  0.6693 ± 0.2061  0.0429 ± 0.0256
             20    0.0105 ± 0.0026  0.0172 ± 0.0051  0.0147 ± 0.0033  0.0844 ± 0.0292  0.0174 ± 0.0064
Rosenbrock    5    0.6206 ± 0.2583  0.7008 ± 0.3557  0.3926 ± 0.1976  0.3898 ± 0.2273  0.7564 ± 0.2723
             10    0.0788 ± 0.0992  0.1262 ± 0.1508  0.0825 ± 0.0719  0.1243 ± 0.0650  0.1149 ± 0.0780
             20    0.0114 ± 0.0145  0.0270 ± 0.0267  0.0084 ± 0.0108  0.0108 ± 0.0082  0.0271 ± 0.0154
Schwefel      5    0.9954 ± 0.0103  0.9965 ± 0.0086  0.7677 ± 0.4127  0.9875 ± 0.2145  1.0000 ± 0.0000
             10    0.7895 ± 0.2912  0.8528 ± 0.1780  0.4936 ± 0.4470  0.7028 ± 0.1339  0.8592 ± 0.3397
             20    0.0339 ± 0.0221  0.1378 ± 0.1172  0.0287 ± 0.0214  0.0923 ± 0.0405  0.1343 ± 0.1294
Hybrid        5    0.1348 ± 0.2084  0.3247 ± 0.3253  0.3061 ± 0.3591  0.1531 ± 0.1339  0.3298 ± 0.3384
             10    0.0197 ± 0.0173  0.0318 ± 0.0230  0.0281 ± 0.0168  0.0126 ± 0.0144  0.0293 ± 0.0135
             20    0.0032 ± 0.0009  0.0069 ± 0.0022  0.0043 ± 0.0016  0.0028 ± 0.0010  0.0064 ± 0.0024

The experimental results clearly indicate that the parameter values of the original CPSO, i.e., \chi = 0.7298 and c_1 = c_2 = 2.05, are not appropriate for these problems. The Sphere problem with 5 and 10 dimensions is too simple to demonstrate the effectiveness of the ECPSO. The situation is exactly the same in the comparison between the ECPSO and the OPSO.

Table 3 also indicates that the search performance of the ECPSO is superior to that of the OPSO for the given benchmark problems except for the Sphere problem. Table 3 further shows that RGA/E clearly outperforms the ECPSO on the Rastrigin problem, which suggests that in the optimization of nonlinear functions with many local peaks of different heights, the ECPSO is not as effective as RGA/E. The Hybrid composition function also has many local peaks with different heights, but the ECPSO is superior to RGA/E on the Hybrid composition task. This might be due to the fact that the above inherent difficulty is ameliorated by mixing with other, less difficult tasks.

Table 3 indicates that there is not much difference in search performance between the ECPSO and the EPSO except for the Rastrigin problem in the 5-dimensional case. This suggests that the ECPSO is almost equivalent to optimized PSO models.

6. Conclusions

In this paper, we have proposed an evolutionary canonical particle swarm optimizer, ECPSO, for effectively solving various optimization problems. The crucial idea is to optimize the values of parameters in the CPSO by a real-coded genetic algorithm, which was originally proposed by the authors in extending PSO to EPSO.

Applications of the ECPSO to a suite of multidimensional benchmark problems have demonstrated the superiority of the ECPSO on average. Our experimental results clearly indicated that (1) the optimized values of parameters in the CPSO are not unique and are subject to variation; and (2) the values of the optimized parameters are different from those often used in the original CPSO. This suggests the importance and necessity of introducing evolutionary computation into the CPSO. Therefore, it is expected that the performance of evolutionary neural networks (Dehuri & Cho, 2009; Spina, 2006; Yu, Wang, & Xi, 2008) could be improved by the ECPSO.
Applications of the proposed method to complex real-world problems are left for future study. Although the proposed method cannot completely prevent premature convergence and stagnation in optimization, it is still expected that fruitful results will be obtained by combining a family of PSO methods with evolutionary computation.

Acknowledgements

This research was partially supported by a COE (Center of Excellence) program (#J19) granted to Kyushu Institute of Technology from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. It was also supported by a Grant-in-Aid for Scientific Research (C) (18500175) from MEXT, Japan.

References

Abraham, A., Guo, H., & Liu, H. (2006). Swarm intelligence: Foundations, perspectives and applications. In Nedjah, N., & Mourelle, L. (Eds.), Studies in computational intelligence: Swarm intelligence systems (SCI) (pp. 3–25). Germany: Springer-Verlag.
Beielstein, T., Parsopoulos, K. E., & Vrahatis, M. N. (2002). Tuning PSO parameters through sensitivity analysis. Technical report of the collaborative research center 531 computational intelligence CI-124/02, University of Dortmund.
Campana, E. F., Fasano, G., Peri, D., & Pinto, A. (2006). Particle swarm optimization: Efficient globally convergent modifications. In Motasoares, C. A., et al. (Eds.), III European conference on computational mechanics solids, structures and coupled problems in engineering (pp. 412–429). Netherlands: Springer. doi:10.1007/1-4020-5370-3.
Carlisle, A., & Dozier, G. (2001). An off-the-shelf PSO. In Proceedings of the workshop on particle swarm optimization (pp. 1–6).
Clerc, M. (2006). Particle swarm optimization. UK: Iste Publishing Co.
Clerc, M., & Kennedy, J. (2002). The particle swarm-explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58–73.
Dehuri, S., & Cho, S.-B. (2009). A comprehensive survey on functional link neural networks and an adaptive PSO-BP learning for CFLNN. Neural Computing & Applications, 196, 578–593. doi:10.1007/s00521-009-0288-5.
Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the sixth international symposium on micro machine and human science (pp. 39–43).
Eberhart, R. C., & Shi, Y. (2000). Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the 2000 IEEE congress on evolutionary computation, Vol. 1 (pp. 84–88).
Eberhart, R. C., Shi, Y., & Kennedy, J. (2001). Swarm intelligence. CA, USA: Morgan Kaufmann Publishers.
Eshelman, L. J., & Schaffer, J. D. (1993). Real-coded genetic algorithms and interval-schemata. In Foundations of genetic algorithms: Vol. 2 (pp. 187–202). San Mateo: Morgan Kaufmann Publishers.
Janson, S., & Middendorf, M. (2005). A hierarchical particle swarm optimizer and its adaptive variant. IEEE Transactions on Systems, Man and Cybernetics, Part B, 35(6), 1272–1282.
Kennedy, J. (2006). In search of the essential particle swarm. In Proceedings of the 2006 IEEE congress on evolutionary computations (pp. 6158–6165). Vancouver, BC, Canada.
Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the 1995 IEEE international conference on neural networks (pp. 1942–1948).
Kennedy, J., & Mendes, R. (2002). Population structure and particle swarm performance. In Proceedings of the IEEE congress on evolutionary computation (pp. 1671–1676).
Lane, J., Engelbrecht, A., & Gain, J. (2008). Particle swarm optimization with spatially meaningful neighbours. In Proceedings of the swarm intelligence symposium (pp. 1–8).
Meissner, M., Schmuker, M., & Schneider, G. (2006). Optimized particle swarm optimization (OPSO) and its application to artificial neural network training. BMC Bioinformatics, 7(125).
Parsopoulos, K. E., & Vrahatis, M. N. (2002). Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1, 235–306.
Pasupuleti, P., & Battiti, R. (2000). The gregarious particle swarm optimizer (G-PSO). In Proceedings of the IEEE congress on evolutionary computation (pp. 84–88).
Poli, R., Kennedy, J., & Blackwell, T. (2007). Particle swarm optimization — An overview. Swarm Intelligence, 1, 33–57.
Reyes-Sierra, M., & Coello, C. A. C. (2006). Multi-objective particle swarm optimizers: A survey of the state-of-the-art. International Journal of Computational Intelligence Research, 2(3), 287–308.
Shi, Y., & Eberhart, R. C. (1998). A modified particle swarm optimiser. In Proceedings of the IEEE international conference on evolutionary computation (pp. 69–73).
Spina, R. (2006). Optimisation of injection moulded parts by using ANN-PSO approach. Journal of Achievements in Materials and Manufacturing Engineering, 15(1–2), 146–152.
Storn, R., & Price, K. (1997). Differential evolution — A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11, 341–359.
Trelea, I. C. (2003). The particle swarm optimization algorithm: Convergence analysis and parameter selection. Information Processing Letters, 85, 317–325.
Xie, X.-F., Zhang, W.-J., & Yang, Z.-L. (2002). A dissipative particle swarm optimization. In Proceedings of the IEEE congress on evolutionary computation (pp. 1456–1461).
Yu, J.-B., Wang, S.-J., & Xi, L.-F. (2008). Evolving artificial neural networks using an improved PSO and DPSO. Neurocomputing, 71, 1054–1060.
Zhang, H., & Ishikawa, M. (2005). A hybrid real-coded genetic algorithm with local search. In Proceedings of the 12th international conference on neural information processing (pp. 732–737).
Zhang, H., & Ishikawa, M. (2007). Evolutionary particle swarm optimization (EPSO) — Estimation of optimal PSO parameters by GA. In Proceedings of the IAENG international multiconference of engineers and computer scientists, Vol. 1 (pp. 13–18). Hong Kong, China: Newswood Limited.
Zhang, H., & Ishikawa, M. (2008). Designing particle swarm optimization — Performance comparison of two temporally cumulative fitness functions in EPSO. In Proceedings of the 26th IASTED international conference on artificial intelligence and applications (pp. 301–306).
Zhang, H., & Ishikawa, M. (2008). Evolutionary particle swarm optimization — Metaoptimization method with GA for estimating optimal PSO methods. In Castillo, O., et al. (Eds.), Lecture notes in electrical engineering: Vol. 6. Trends in intelligent systems and computer engineering (pp. 75–90). Springer-Verlag.
Zhang, H., & Ishikawa, M. (2008). Evolutionary canonical particle swarm optimizer — A proposal of meta-optimization in model selection. In LNCS: Vol. 5163. Proceedings of the 18th international conference on artificial neural networks, Part 1 (pp. 472–481). Prague, Czech Republic.
http://www.ntu.edu.sg/home/epnsugan/index_files/CEC-05/Tech-Report-May-30-05.pdf.
