Meta-heuristics
Article in International Journal of Advances in Soft Computing and its Applications · March 2013
All content following this page was uploaded by Zahra Beheshti on 12 January 2015.
Abstract
Exact optimization algorithms cannot provide appropriate solutions for
optimization problems with high-dimensional search spaces. In these
problems, the search space grows exponentially with the problem size;
therefore, exhaustive search is not practical. In addition, classical
approximate optimization methods such as greedy-based algorithms make
several assumptions in order to solve the problems, and validating these
assumptions can be difficult for each problem. Hence, meta-heuristic
algorithms, which make few or no assumptions about a problem and can search
very large spaces of candidate solutions, have been extensively developed
to solve optimization problems. Among these algorithms, population-based
meta-heuristics are well suited to global search owing to their global
exploration and local exploitation abilities. In this paper, a survey of
meta-heuristic algorithms is performed, and several population-based
meta-heuristics in continuous (real) and discrete (binary) search spaces
are explained in detail, covering the design, main algorithm, advantages
and disadvantages of each.
1 Introduction
In the past decades, many optimization algorithms, including exact and
approximate algorithms, have been proposed to address optimization problems. In
the class of exact optimization algorithms, the design and implementation are
usually based on methods such as dynamic programming, backtracking and
branch-and-bound [1]. Although these algorithms perform well on many problems
[1, 2, 3, 4, 5], they are not efficient for large-scale combinatorial and
highly non-linear optimization problems, because the search space increases
exponentially with the problem size and exhaustive search becomes impractical.
Also, traditional approximate methods such as greedy algorithms usually require
several assumptions which might not be easy to validate in many situations [1].
Therefore, a set of more adaptable and flexible algorithms is required to
overcome these limitations. Based on this motivation, several algorithms,
usually inspired by natural phenomena, have been proposed in the literature.
Among them, meta-heuristic search algorithms with a population-based framework
have shown satisfactory capability to handle high-dimensional optimization
problems.
Artificial Immune System (AIS) [6], Genetic Algorithm (GA) [7], Ant Colony
Optimization (ACO) [8], Particle Swarm Optimization (PSO) [9, 10], Stochastic
Diffusion Search (SDS) [11], Artificial Bee Colony (ABC) [12], Intelligent Water
Drops (IWD) [13], River Formation Dynamics (RFD) [14], Gravitational Search
Algorithm (GSA) [15] and Charged System Search (CSS) [16] are in this class of
algorithms. These algorithms and their improved schemes have shown good
performance in a wide range of problems such as neural network training [17,
18], pattern recognition [19, 20], function optimization [21, 22], image
processing [23, 24], data mining [25, 26], combinatorial optimization [27, 28]
and so on.
2 Concept of meta-heuristic
The words “meta” and “heuristic” are Greek: “meta” means “higher level” or
“beyond”, while “heuristic” means “to find”, “to know”, “to guide an
investigation” or “to discover” [29]. Heuristics [30] are methods to find
good (near-optimal) solutions.
3 A Review of Population-based Meta-Heuristic
According to Voss et al. [32], a meta-heuristic is: “an iterative master process that
guides and modifies the operations of subordinate heuristics to efficiently produce
high-quality solutions. It may manipulate a complete (or incomplete) single
solution or a collection of solutions per iteration. The subordinate heuristics may
be high (or low) level procedures, or a simple local search, or just a construction
method.”
To realize the concepts of exploration and exploitation, particles pass through
three phases inspired by nature in each step, namely self-adaptation,
cooperation and competition. In the self-adaptation phase, each particle
enhances its performance. Particles collaborate by transferring information in
the cooperation phase and, finally, they compete to survive in the competition
phase. These concepts direct an algorithm toward finding the best solution [37].
4 Related works
Many meta-heuristic algorithms have been proposed so far as shown in Table 1.
Genetic Algorithm (GA) as a population-based meta-heuristic algorithm was
suggested by Holland [42]. In the algorithm, a population of strings called
chromosomes encodes candidate solutions for optimization problems.
Farmer et al. [6] introduced the Artificial Immune System (AIS), which
simulates the structure and function of the biological immune system to solve
problems. The immune system defends the body against foreign or dangerous
cells or substances which invade it. In fact, AIS copies the method by which
the human body acquires immunity through vaccination against diseases. AIS
applies the VACCINE-AIS algorithm to identify the best solution from a given
number of antibodies and antigens. In AIS, the decision points and solutions
play the roles of antibodies and antigens in the immune system, and are
employed to solve optimization problems.
Particle Swarm Optimization (PSO) [9, 10] models the social behavior of bird
flocking, in which a swarm of particles moves through the search space. In the
algorithm, particles are evaluated by their fitness values. They move toward
those particles which have better fitness values and finally obtain the best
solution.
The Ant Colony Optimization (ACO) [8] algorithm models the foraging behavior
of ants and is useful for problems in which finding the shortest path is the
goal. In the real world, when ants explore their environment, they lay down
pheromones to direct each other toward resources. ACO simulates this
mechanism: each ant similarly marks its path so that more ants locate better
solutions in later iterations. This trend continues until the best path is
found.
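The pheromone mechanism described above can be sketched in a few lines. The
following Python fragment is only a minimal illustration: the function names
and the parameters alpha, rho and q are assumptions for the example, and the
distance heuristic of Dorigo's full Ant System is omitted.

```python
import random

def choose_next(city, unvisited, pheromone, alpha=1.0):
    # Each ant picks the next city with probability proportional to the
    # pheromone on the connecting edge (heuristic desirability omitted).
    weights = [pheromone[(city, j)] ** alpha for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def update_pheromone(pheromone, tours, rho=0.5, q=1.0):
    # Evaporate all trails, then let each ant deposit pheromone inversely
    # proportional to its tour length, reinforcing short paths.
    for edge in pheromone:
        pheromone[edge] *= (1.0 - rho)
    for tour, length in tours:
        for edge in zip(tour, tour[1:]):
            pheromone[edge] += q / length
```

Over iterations, edges on short tours accumulate pheromone and attract more
ants, which is the positive-feedback loop the paragraph describes.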
Also, Charged System Search (CSS) [16] was developed based on Newtonian laws
from mechanics and the governing Coulomb and Gauss laws from electrostatics.
The algorithm is multi-agent, and each agent is called a Charged Particle
(CP). Each CP exerts an electric force on the other CPs according to the
Coulomb and Gauss laws. This force gives acceleration to a CP, and the
velocity of each CP changes with time. The motion laws and the resultant
forces determine the new positions (new solutions) of the CPs. These positions
are evaluated and replace the previous ones if they are better. This trend
continues until the maximum number of iterations is reached, and the best
result found so far is returned.
p_i = f_i / Σ_{j=1}^{N} f_j ,   (1)
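Eq. (1) is the fitness-proportionate (roulette-wheel) selection probability.
A minimal Python sketch of drawing a parent index with these probabilities
follows; the function name is illustrative.

```python
import random

def roulette_wheel_select(fitness):
    # p_i = f_i / sum_j f_j (Eq. 1): fitter chromosomes are
    # proportionally more likely to be chosen as parents.
    total = sum(fitness)
    probs = [f / total for f in fitness]
    return random.choices(range(len(fitness)), weights=probs, k=1)[0]
```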
The main idea of Genitor, or steady-state selection, is that a big part of the
chromosomes survives into the next generation. A few chromosomes with high
fitness are chosen to create new offspring in each generation. Then, some
chromosomes with low fitness are deleted and the new offspring take their
place. The rest of the population survives into the new generation.
Elitism is a method which copies the best chromosome (or a few of the best
chromosomes) to the new population, while the rest is created by the methods
mentioned above. This method increases the performance of GA because it avoids
losing the best-found solution [48]. The above steps are illustrated in
Fig. 5.
Although GA has been extensively used in various problems, it suffers from some
disadvantages [49, 50, 51] such as:
Fig. 4 GA crossover
Step 1: Start.
Step 2: Create first generation of chromosomes.
Step 3: Define Parameters and fitness function.
Step 4: Calculate the fitness of each individual chromosome.
Step 5: Choose the chromosomes by Elitism method.
Step 6: Select a pair of chromosomes as parents.
Step 7: Perform Crossover and Mutation to generate new chromosomes.
Step 8: Combine the new chromosomes and the chromosomes of Elitism Set in
the new population (the next generation).
Step 9: Repeat Step 4 to Step 8 until reaching termination criteria.
Step 10: Return best solution.
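Steps 1-10 can be sketched compactly. The following Python fragment is a
minimal, illustrative GA with binary chromosomes, elitism, one-point crossover
and bit-flip mutation; all parameter values are assumptions chosen for the
example, not prescriptions from the survey.

```python
import random

def ga(fitness, n_bits=16, pop_size=30, gens=60,
       p_cx=0.9, p_mut=0.02, elite=2):
    # Step 2: create the first generation of random binary chromosomes.
    pop = [[random.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)        # Step 4: evaluate
        nxt = [c[:] for c in pop[:elite]]          # Step 5: elitism
        while len(nxt) < pop_size:
            p1, p2 = random.sample(pop[:pop_size // 2], 2)  # Step 6
            c = p1[:]
            if random.random() < p_cx:             # Step 7: crossover
                cut = random.randrange(1, n_bits)
                c = p1[:cut] + p2[cut:]
            c = [b ^ (random.random() < p_mut) for b in c]  # mutation
            nxt.append(c)
        pop = nxt                                  # Step 8: next generation
    return max(pop, key=fitness)                   # Step 10

# Example: maximize the number of 1-bits (OneMax).
best = ga(sum)
```

Because the elite chromosomes are copied unchanged, the best fitness found
never decreases between generations, which is the point made about elitism
above.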
X_i = (x_i1, x_i2, ..., x_id)  for i = 1, 2, ..., N.   (2)
V_i = (v_i1, v_i2, ..., v_id)  for i = 1, 2, ..., N.   (3)
P_i is the personal best position found by the ith particle:
P_i = (p_i1, p_i2, ..., p_id)  for i = 1, 2, ..., N.   (4)
Also, the best position achieved by the entire population (P_g) is computed to
update the particle velocity:
P_g = (p_g1, p_g2, ..., p_gd).   (5)
From P_i and P_g, the next velocity and position of the ith particle are
updated by Eq. (6) and Eq. (7):
v_id(t+1) = w(t) v_id(t) + C1 · rand · (p_id(t) − x_id(t)) + C2 · rand · (p_gd(t) − x_id(t)),   (6)
x_id(t+1) = x_id(t) + v_id(t+1),   (7)
where v_id(t+1) and v_id(t) are the next and current velocity of the ith
particle respectively, w is the inertia weight, C1 and C2 are acceleration
coefficients, rand is a uniformly distributed random number in the interval
[0, 1] and N is the number of particles. x_id(t+1) and x_id(t) denote the next
and current position of the ith particle.
In Eq. (6), the second and third terms are called the cognition and social
terms respectively. Also, |v_id| < v_max is considered, and v_max is set to a
constant based on the range of the search space.
In the PSO algorithm, two models for choosing P_g are considered, known as the
gbest (global topology) and lbest (local topology) models. In the global
model, the position of each particle is influenced by the best-fitness
particle of the entire population, whereas in the local model, each particle
is affected by the best-fitness particle chosen from its neighborhood.
According to Bratton and Kennedy [53], the lbest model can return better
results than the gbest model in many problems; however, it might have a lower
convergence rate than the gbest model. The steps of PSO are shown in Fig. 6.
Although PSO is easy to implement, it may suffer from a slow convergence rate
and parameter-selection difficulties, and it easily gets trapped in a local
optimum due to its poor exploration when solving complex multimodal problems
[54, 55, 56]. If a particle falls into a local optimum, it sometimes cannot
escape from that position. In other words, if the P_g obtained by the
population is a local optimum and the current position and personal best
position of particle i lie in that local optimum, the second and third terms
of Eq. (6) tend toward zero; also, w is linearly decreased to near zero.
Consequently, the next velocity of particle i tends toward zero, its next
position in Eq. (7) cannot change, and the particle remains in the local
optimum. Therefore, variant PSO algorithms have been proposed to improve the
performance of PSO and to overcome these limitations.
Step 1: Start.
Step 2: Initialize the velocities and positions of population randomly.
Step 3: Evaluate fitness values of particles.
Step 4: Update P_i if the fitness value of particle i is better than its
personal best fitness value, for i = 1, ..., N.
Step 5: Update P_g if the fitness value of particle i is better than the
global best fitness value, for i = 1, ..., N.
Step 6: Update the next velocity of particles.
Step 7: Update the next position of particles.
Step 8: Repeat steps 3 to 7 until the stop criterion is reached.
Step 9: Return best solution.
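The steps above, together with Eqs. (2)-(8), can be sketched as a short
program. This is a minimal gbest PSO for minimization; the search bounds,
v_max and the coefficient values are illustrative choices, not values
prescribed by the survey.

```python
import random

def pso(f, dim=2, n=20, iters=100, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, lo=-5.0, hi=5.0, v_max=1.0):
    # Step 2: random initial positions and zero velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    p = [xi[:] for xi in x]                        # personal bests P_i
    pf = [f(xi) for xi in x]
    g = p[min(range(n), key=lambda i: pf[i])][:]   # global best P_g
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters    # Eq. (8)
        for i in range(n):
            for d in range(dim):
                v[i][d] = (w * v[i][d]             # Eq. (6)
                           + c1 * random.random() * (p[i][d] - x[i][d])
                           + c2 * random.random() * (g[d] - x[i][d]))
                v[i][d] = max(-v_max, min(v_max, v[i][d]))  # |v_id| < v_max
                x[i][d] += v[i][d]                 # Eq. (7)
            fx = f(x[i])
            if fx < pf[i]:                         # Step 4: update P_i
                pf[i], p[i] = fx, x[i][:]
                if fx < f(g):                      # Step 5: update P_g
                    g = x[i][:]
    return g

# Example: minimize the 2-D sphere function f(x) = x1^2 + x2^2.
best = pso(lambda z: sum(c * c for c in z))
```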
PSO is one of the most popular optimizers and has been widely applied to
optimization problems. Hence, performance enhancement and theoretical studies
of the algorithm have become attractive. Convergence analysis and stability
studies have been reported in [57, 58, 59, 60, 61]. Also, some research on the
performance of PSO has been done in terms of topological structures, parameter
studies and combination with auxiliary operations [62, 63].
In parameter studies, much research has been done on the inertia weight w and
the acceleration coefficients C1 and C2. Shi and Eberhart [10, 52] suggested
that the inertia weight w in Eq. (6) be linearly decreased over the iterative
generations as in Eq. (8):
w = w_max − (w_max − w_min) · iter / Maxiter,   (8)
where iter is the current generation and Maxiter is the maximum number of
generations. w_max and w_min are usually set to 0.9 and 0.4 respectively.
Moreover, other settings of w have been proposed to improve the searching
ability of PSO. A fuzzy adaptive w was introduced, and a random version
setting w to 0.5 + rand(0, 1)/2 was experimented with for dynamic system
optimization [68, 69].
Also, a constriction factor [57] was introduced based on Eq. (9) and the next
velocity was computed according to Eq. (11):
χ = 2 / | 2 − φ − √(φ² − 4φ) | ,   (9)
φ = C1 + C2 = 4.1 ,   (10)
v_id(t+1) = χ · ( v_id(t) + C1 · rand · (p_id(t) − x_id(t)) + C2 · rand · (p_gd(t) − x_id(t)) ),   (11)
The experimental results have illustrated that both acceleration coefficients
C1 and C2 are essential to the success of PSO. Kennedy and Eberhart [9]
offered a fixed value of 2.0, and this configuration has been adopted by many
other researchers.
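For reference, the constriction factor of Eq. (9) can be computed directly.
With C1 = C2 = 2.05 (so that φ = 4.1, as in Eq. (10)), it evaluates to roughly
0.729, the value commonly used in practice; the function name below is
illustrative.

```python
import math

def constriction(c1=2.05, c2=2.05):
    # Clerc-Kennedy constriction factor of Eq. (9):
    # chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = C1 + C2.
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
```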
Ratnaweera et al. [71] proposed the self-organizing Hierarchical Particle
Swarm Optimizer with Time-Varying Acceleration Coefficients (HPSO-TVAC), which
uses linearly time-varying acceleration coefficients: a larger C1 and a
smaller C2 are set at the beginning and are gradually reversed during the
search. Therefore, particles are allowed to move around the search space at
the beginning instead of moving toward the population best. The w in
HPSO-TVAC is used as in Eq. (8), and C1 and C2 are computed as:
C_j = (C_jf − C_ji) · iter / Maxiter + C_ji ,   j = 1, 2,   (12)
where C_jf and C_ji are the final and initial values of the acceleration
coefficients, which are changed from 2.5 to 0.5 for C1 and from 0.5 to 2.5
for C2.
v_id(t+1) = w(t) v_id(t) + C · rand · (p_{f_i(d),d}(t) − x_id(t)),   (13)
where the exemplar f_i(d) can take different values for different particles.
For each dimension of particle i, a random number is generated, and a
tournament selection procedure is used: two particles are chosen randomly, and
the one with the better fitness is selected as the exemplar to learn from for
that dimension. CLPSO has only one acceleration coefficient C, which is
normally set to 1.494, and the inertia weight is changed from 0.9 to 0.4.
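The per-dimension tournament described above can be sketched as follows. This
is a minimal illustration assuming minimization; the fixed learning
probability pc is a simplification introduced here, since CLPSO assigns each
particle its own value.

```python
import random

def choose_exemplars(pbest_fitness, i, dims, pc=0.3):
    # For each dimension of particle i: with probability pc, run a
    # size-2 tournament over two random other particles' personal bests
    # (lower fitness wins); otherwise learn from particle i's own best.
    n = len(pbest_fitness)
    exemplars = []
    for _ in range(dims):
        if random.random() < pc:
            a, b = random.sample([j for j in range(n) if j != i], 2)
            exemplars.append(a if pbest_fitness[a] < pbest_fitness[b] else b)
        else:
            exemplars.append(i)
    return exemplars
```

Each entry of the returned list is the index f_i(d) whose personal best the
particle follows in that dimension, as in Eq. (13).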
An Adaptive PSO (APSO) was proposed by Zhan et al. [55]. In this algorithm, an
evolutionary factor f is defined and computed with a fuzzy classification
method to design effective parameters and to improve the speed of solving
optimization problems. Hence, w changes based on a sigmoid mapping w(f), as
shown in Eq. (14). A large f benefits the global search, whereas a small f
indicates a convergence state.
w(f) = 1 / (1 + 1.5 e^(−2.6 f)) ∈ [0.4, 0.9],   f ∈ [0, 1].   (14)
C_i(t+1) = C_i(t) ± δ ,   i = 1, 2,   (15)
where δ is termed the acceleration rate, and the sum of C1 and C2 is kept in
the interval [3.0, 4.0]. If the sum of C1 and C2 is larger than 4.0, then both
C1 and C2 are normalized to:
C_i = C_i / (C1 + C2) · 4.0 ,   i = 1, 2.   (16)
Another component of APSO is an Elitist Learning Strategy (ELS), which helps
P_g to jump out of local optimal regions when the search is identified to be
in a convergence state. If a better region is found for P_g, the rest of the
swarm will follow to jump out and converge to the new region.
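The sigmoid mapping of Eq. (14) and the normalization of Eq. (16) are easy to
verify numerically. The sketch below uses illustrative function names; note
that w(0) = 0.4 and w(1) ≈ 0.9, matching the stated range.

```python
import math

def apso_inertia(f):
    # Sigmoid mapping of Eq. (14): w(f) = 1 / (1 + 1.5 e^{-2.6 f}),
    # which stays inside [0.4, 0.9] for f in [0, 1].
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))

def normalize_coefficients(c1, c2):
    # Eq. (16): if C1 + C2 exceeds 4.0, rescale both so their sum
    # becomes exactly 4.0; otherwise leave them unchanged.
    s = c1 + c2
    if s > 4.0:
        c1, c2 = c1 / s * 4.0, c2 / s * 4.0
    return c1, c2
```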
Although many extended PSO algorithms have been presented so far, the
performance enhancement of PSO remains an open problem because of the simple
structure and ease of use of the algorithm.
In the Imperialist Competitive Algorithm (ICA) [46], to divide the colonies
among the imperialists, the cost and power of each imperialist are normalized
and the initial colonies are allocated to the empires as in Eq. (17) to
Eq. (19):
C_n = c_n − max_i {c_i},   (17)
p_n = | C_n / Σ_{i=1}^{N_imp} C_i | ,   (18)
N.C_n = round( p_n · N_col ),   (19)
where c_n is the cost of the nth imperialist, and C_n and p_n are the
normalized cost and power of the imperialist respectively. Also, N.C_n is the
initial number of colonies of the nth empire and N_col is the total number of
initial colonies. To form the nth empire, N.C_n of the colonies are randomly
selected and allocated to the nth imperialist. In the real world, imperialist
states try to make their colonies part of themselves. This process, called
assimilation, is modelled by moving all of the colonies toward the
imperialist, as illustrated in Fig. 7. In this figure, θ and x are a random
angle and a random number with uniform distributions, and d is the distance
between the imperialist and the colony.
x ~ U(0, β × d),   (20)
θ ~ U(−γ, γ),   (21)
where β > 1 and γ are parameters of the algorithm.
Finally, empires compete to possess and control other empires' colonies, as
shown in Fig. 8. A colony of the weakest empire is selected, and the
possession probability of each empire, P_p, is computed as in Eq. (23). The
normalized total cost of an empire, N.T.C_n, is acquired by Eq. (22) and used
to obtain the empire possession probability.
N.T.C_n = T.C_n − max_i {T.C_i}.   (22)
p_{p_n} = | N.T.C_n / Σ_{i=1}^{N_imp} N.T.C_i | .   (23)
P = ( p_{p_1}, p_{p_2}, p_{p_3}, ..., p_{p_{N_imp}} ).   (24)
A vector R with the same size as P, whose elements are uniformly distributed
random numbers, is created:
R = ( r_1, r_2, r_3, ..., r_{N_imp} ),   r_1, r_2, r_3, ..., r_{N_imp} ~ U(0, 1).   (25)
D = P − R = ( D_1, D_2, D_3, ..., D_{N_imp} ).   (26)
The competition affects the power of the empires, making an empire weaker or
stronger. Eventually, all colonies of the weakest empire are taken over by
more powerful empires and the weakest empire is eliminated. The total power
(cost) of an empire is modelled by adding the power (cost) of the imperialist
country and a percentage of the mean power (cost) of its colonies, as follows:
T.C_n = Cost(imperialist_n) + ξ · mean{Cost(colonies of empire_n)},   (27)
where T.C_n is the total cost of the nth empire and ξ is a small positive
number.
Step 1: Start.
Step 2: Select some random points on the function and initialize the empires.
Step 3: Move the colonies toward their relevant imperialist (Assimilation).
Step 4: Randomly change the position of some colonies (Revolution).
Step 5: If there is a colony in an empire which has lower cost than the
imperialist, exchange the positions of that colony and the imperialist.
Step 6: Unite the similar empires.
Step 7: Compute the total cost of all empires.
Step 8: Pick the weakest colony (colonies) from the weakest empires and give it
(them) to one of the empires (Imperialistic competition).
Step 9: Eliminate the powerless empires.
Step 10: If the stop condition is satisfied, stop; otherwise go to Step 3.
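The assimilation move of Step 3 (Eqs. (20)-(21)) can be sketched in two
dimensions as follows. The values of beta and gamma are typical illustrative
settings, not values fixed by the survey, and the 2-D rotation is a
simplification for the example.

```python
import math
import random

def assimilate(colony, imperialist, beta=2.0, gamma=math.pi / 4):
    # Move the colony a random distance x ~ U(0, beta*d) toward its
    # imperialist, then deviate the direction by theta ~ U(-gamma, gamma).
    dx = [imp - col for col, imp in zip(colony, imperialist)]
    d = math.sqrt(sum(c * c for c in dx))
    if d == 0.0:
        return colony[:]
    step = random.uniform(0.0, beta * d)
    theta = random.uniform(-gamma, gamma)
    ux, uy = dx[0] / d, dx[1] / d                 # unit direction
    rx = ux * math.cos(theta) - uy * math.sin(theta)
    ry = ux * math.sin(theta) + uy * math.cos(theta)
    return [colony[0] + step * rx, colony[1] + step * ry]
```

Because beta may exceed 1, a colony can overshoot its imperialist, which lets
the empire keep exploring on both sides of its best solution.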
Although ICA has shown good performance in many problems [75, 76, 77], it has
some drawbacks which make the algorithm difficult to use:
In the Gravitational Search Algorithm (GSA) [15], the position of the ith
agent is defined as:
X_i = (x_i1, x_i2, ..., x_id)  for i = 1, 2, ..., N,   (28)
where x_id represents the position of the ith agent in the dth dimension.
At the beginning, these positions are initialized randomly. Then, the
gravitational force of mass j on mass i at a specific time t is computed as
follows:
F_ij,d(t) = G(t) · M_i(t) M_j(t) / (R_ij(t) + ε) · (x_jd(t) − x_id(t)),   (29)
where M_i and M_j are the masses of agent i and agent j respectively, and ε is
a small constant. R_ij(t) is the Euclidean distance between agents i and j at
time t:
R_ij(t) = || X_i(t), X_j(t) ||_2 .   (30)
Also, G(t) is the gravitational constant, which is initialized at the
beginning and reduced over time t to control the search accuracy; G is a
function of time with the initial value G_0.
m_i(t) = ( fit_i(t) − worst(t) ) / ( best(t) − worst(t) ),   (32)
M_i(t) = m_i(t) / Σ_{j=1}^{N} m_j(t),   (33)
where fit_i(t) is the fitness value of agent i at time t, and best(t) and
worst(t) are the best and worst fitness values at time t.
F_id(t) = Σ_{j ∈ Kbest, j ≠ i} rand_j · F_ij,d(t),   (34)
where rand_j is a random number in the range [0, 1] and Kbest is the set of
the first K agents with the best fitness values and biggest masses. At the
beginning, Kbest is initialized to K_0 and linearly reduced during the run of
the algorithm.
Regarding the motion law, the force gives an acceleration to agent i:
a_id(t) = F_id(t) / M_i(t).   (35)
This acceleration moves the agent from one position to another. Hence, the
next velocity of agent i in dimension d is computed as the sum of a random
fraction of its current velocity and its acceleration, and the next position
follows from the current position and the next velocity:
v_id(t+1) = rand_i · v_id(t) + a_id(t),   (36)
x_id(t+1) = x_id(t) + v_id(t+1),   (37)
where v_id(t+1) and x_id(t) are the next velocity and the current position of
agent i in dimension d, and rand_i is a uniform random number in [0, 1].
Fig. 10 shows the pseudo-code of GSA. As seen, the algorithm is initialized
randomly and each agent is evaluated based on its fitness value. After
computing the total force and the acceleration, the velocity and position of
each agent are updated. These steps continue until the stopping criterion is
met, and the best solution is returned by the algorithm.
Similar to other meta-heuristic algorithms, GSA also has some weaknesses such
as having complex operators and taking long computational time.
Step 1: Start.
Step 2: Randomized initialization.
Step 3: Fitness evaluation of agents.
Step 4: Update G(t), best(t), worst(t) and Mi(t) for i = 1,2,...,N.
Step 5: Calculation of the total force in different directions.
Step 6: Calculation of acceleration and velocity.
Step 7: Updating agents’ position.
Step 8: Repeat steps 3 to 7 until the stop criterion is reached.
Step 9: Return best solution.
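Steps 1-9 can be sketched as a short program. The following minimal GSA for
minimization uses a linearly decreasing G and a Kbest that shrinks from N to
1; the parameter values are illustrative assumptions, and the agent's own
mass is cancelled between Eqs. (29) and (35), a common implementation
shortcut.

```python
import math
import random

def gsa(f, dim=2, n=15, iters=80, g0=5.0, lo=-5.0, hi=5.0, eps=1e-9):
    # Step 2: random initialization of positions; zero velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    best_x = min(x, key=f)[:]
    for t in range(iters):
        fit = [f(xi) for xi in x]                  # Step 3: evaluate
        best, worst = min(fit), max(fit)
        m = [(fi - worst) / (best - worst - eps) for fi in fit]  # Eq. (32)
        total = sum(m) + eps
        mass = [mi / total for mi in m]            # Eq. (33)
        g = g0 * (1.0 - t / iters)                 # Step 4: decrease G(t)
        kbest = max(1, round(n * (1.0 - t / iters)))
        attract = sorted(range(n), key=lambda i: fit[i])[:kbest]
        for i in range(n):
            for d in range(dim):
                force = 0.0                        # Step 5: Eqs. (29), (34)
                for j in attract:
                    if j == i:
                        continue
                    r = math.dist(x[i], x[j]) + eps
                    force += (random.random() * g * mass[j]
                              * (x[j][d] - x[i][d]) / r)
                # Step 6: a = F/M_i with the agent's own mass cancelled.
                v[i][d] = random.random() * v[i][d] + force   # Eq. (36)
                x[i][d] += v[i][d]                 # Step 7: Eq. (37)
        cand = min(x, key=f)
        if f(cand) < f(best_x):
            best_x = cand[:]
    return best_x
```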
Each dimension has only a binary value of '0' or '1'. Therefore, the particle
velocity can be interpreted through the number of bits changed per iteration,
or the Hamming distance between the particle at time t and t+1. Consequently,
updating the particle position happens when a switch between the '0' and '1'
values occurs. Based on these rules, binary versions of PSO and GSA have been
proposed by Kennedy and Eberhart [79] and Rashedi et al. [80] respectively. In
the next subsections, an overview of these algorithms is briefly presented to
provide an appropriate background on binary search spaces.
In BPSO, the velocity is mapped to a probability using the sigmoid function of
Eq. (38), and the position is set accordingly:
S(v_id(t+1)) = 1 / (1 + e^(−v_id(t+1))),   (38)
x_id(t+1) = 1 if rand < S(v_id(t+1)), and 0 otherwise,   (39)
where V_i = (v_i1, v_i2, ..., v_id) and X_i = (x_i1, x_i2, ..., x_id) are the
velocity and position of the ith particle respectively, and N is the number of
particles. For better convergence, |v_id| < v_max is considered, with
v_max = 6.
Although BPSO has been applied in different discrete problems [81, 82, 83], its
convergence rate is not good in many applications [84].
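The sigmoid-based position update of BPSO can be sketched as follows; the
velocity update itself is unchanged from the continuous rule of Eq. (6), and
the function name below is illustrative.

```python
import math
import random

def bpso_position_update(v, x):
    # Eq. (38): squash each velocity through a sigmoid, then set the bit
    # to 1 with that probability and to 0 otherwise.
    new_x = []
    for vid, _ in zip(v, x):
        s = 1.0 / (1.0 + math.exp(-vid))
        new_x.append(1 if random.random() < s else 0)
    return new_x
```

A strongly positive velocity drives the bit toward 1 and a strongly negative
one toward 0, while a velocity near zero leaves the bit essentially random,
which is one reason the convergence issue noted above arises.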
G(t) = G_0 · (1 − t/T),   (40)
where G_0 is a constant, and t and T are the current iteration and the total
number of iterations respectively.
Moreover, the value of the position is switched between '0' and '1' based on
the velocity. Therefore, a probability function is defined to map the value of
the velocity to the range [0, 1]. In other words, BGSA modifies the velocity
according to Eq. (36), and the new position changes to either '0' or '1' based
on the given probability function, as demonstrated in Eq. (41) and Eq. (42):
S(v_id(t+1)) = | tanh(v_id(t+1)) |,   (41)
x_id(t+1) = complement(x_id(t)) if rand < S(v_id(t+1)), and x_id(t) otherwise.   (42)
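The probability-based bit flip of binary GSA can be sketched as follows,
assuming the |tanh| transfer function used by Rashedi et al.'s BGSA; the
function name is illustrative.

```python
import math
import random

def bgsa_position_update(v, x):
    # With probability |tanh(v_id)| the bit is complemented; otherwise
    # it keeps its current value.
    return [1 - xid if random.random() < abs(math.tanh(vid)) else xid
            for vid, xid in zip(v, x)]
```

Unlike the BPSO sigmoid, a zero velocity here leaves the bit unchanged rather
than random, so agents settle down as velocities shrink.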
6 Discussion
Exact optimization algorithms are not efficient in solving large-scale
combinatorial and multimodal problems, since the search space grows
exponentially with the problem size and exhaustive search becomes
impractical. Hence, many researchers have applied meta-heuristic algorithms
to solve such problems. These algorithms have many advantages, including:
1. They are robust and can adapt solutions to changing conditions and
environments.
2. They can be applied in solving complex multimodal problems.
3. They may incorporate mechanisms to avoid getting trapped in local
optima.
4. They are not problem-specific algorithms.
5. These algorithms are able to find promising regions in a reasonable
time due to exploration and exploitation ability.
6. They can be easily employed in parallel processing.
7 Conclusion
In this paper, we have presented and compared the most important
meta-heuristic algorithms. Several techniques have been considered in these
algorithms to improve the performance of meta-heuristics. Meanwhile, no
meta-heuristic algorithm is able to present higher performance than all others
in solving every problem. Also, existing algorithms suffer from drawbacks such
as slow convergence rates, becoming trapped in local optima, complex
operators, long computational times, the need to tune many parameters, and
designs restricted to either real or binary search spaces. Hence, proposing
new meta-heuristic algorithms that minimize these disadvantages remains an
open problem.
ACKNOWLEDGEMENTS
The authors would like to thank the Soft Computing Research Group (SCRG),
Universiti Teknologi Malaysia (UTM), Johor Bahru, Malaysia, for the support in
making this study a success.
References
[1] Neapolitan, R., Naimipour K., Foundations of Algorithms using C++ Pseudo
code (3rd ed.), Jones and Bartlett, (2004).
[2] Jansson, C., Knoppel, O., “A branch and bound algorithm for bound
constrained optimization problems without derivatives”, Journal of Global
Optimization, Vol. 7, (1995), pp. 297-331.
[3] Toroslu, I. H., Cosar, A. “Dynamic programming solution for multiple query
optimization problem”, Information Processing Letters, Vol. 92, (2004), pp.
149–155.
[4] Balev, S., Yanev, N., Fréville, A., Andonov, R., “A dynamic programming
based reduction procedure for the multidimensional 0–1 knapsack problem”,
European Journal of Operational Research, Vol. 186, No. 1, (2008), pp. 63-76.
[5] Marti, R., Gallego, M., Duarte, A., “A branch and bound algorithm for the
maximum diversity problem”, European Journal of Operational Research, Vol.
200, (2010), pp. 36–44.
[6] Farmer, J. D., Packard, N. H., Perelson, A. S., “The immune system,
adaptation and machine learning”, Physica D, Vol. 2, (1986), pp. 187–204.
[7] Tang, K. S., Man, K. F., Kwong, S., He, Q., “Genetic algorithms and their
applications”, IEEE Signal Processing Magazine, Vol. 13, No. 6, (1996), pp.
22–37.
[8] Dorigo, M., Maniezzo, V., Colorni, A., “The ant system: optimization by a
colony of cooperating agents”, IEEE Transactions on Systems, Man, and
Cybernetics–Part B, Vol. 26, No. 1, (1996), pp. 29–41.
[9] Kennedy, J., Eberhart, R., “Particle swarm optimization”, Proceedings of
IEEE International Conference on Neural Networks, (1995), pp. 1942–1948.
[10] Shi, Y., Eberhart, R., “A modified particle swarm optimizer”, Proceedings of
IEEE International Conference on Evolutionary Computation, (1998), pp. 69–
73.
[11] Bishop, J. M., “Stochastic searching network”, Proceedings of 1st IEE
Conference on Artificial Neural Networks, (1989), pp. 329–331.
[12] Karaboga, D., “An idea based on honey bee swarm for numerical
optimization”, Technical Report, TR06, (2005).
[13] Shah-Hosseini, H., “Problem solving by intelligent water drops”,
Proceedings of IEEE Congress on Evolutionary Computation, (2007), pp.
3226–3231.
[14] Rabanal, P., Rodríguez, I., Rubio, F., “Using river formation dynamics to
design heuristic algorithms”, In AKL et al. (Eds.) Unconventional
Computation, (2007), pp. 163–177, Springer-Verlag.
[15] Rashedi, E., Nezamabadi-pour, H., Saryazdi, S., “GSA: a gravitational search
algorithm”, Information Sciences, Vol. 179, No. 13, (2009), pp. 2232– 2248.
[16] Kaveh, A., Talatahari, S., “A novel heuristic optimization method: charged
system search”, Acta Mechanica, Vol. 213, (2010), pp. 267–289.
[17] Qasem, S. N., Shamsuddin, S. M., “Memetic Elitist Pareto Differential
Evolution algorithm based Radial Basis Function Networks for classification
problems”, Applied Soft Computing, Vol. 11, No. 8, (2011), pp. 5565–5581.
[18] Qasem, S. N., Shamsuddin, S. M., “Radial basis function network based on
time variant multi-objective particle swarm optimization for medical diseases
diagnosis”, Applied Soft Computing, Vol. 11, No. 1, (2011), pp. 1427–1438.
[19] Senaratne, R., Halgamuge, S., Hsu, A., “Face recognition by extending
elastic bunch graph matching with particle swarm optimization”, Journal of
Multimedia, Vol. 4, No. 4, (2009), pp. 204–214.
[20] Cao, K., Yang, X., Chen, X., Zang, Y. Liang, J., Tian, J., “A novel ant colony
optimization algorithm for large-distorted fingerprint matching”, Pattern
Recognition, Vol. 45, (2012), pp. 151–161.
[36] Congram, R. K., Potts, C. N., Van de Velde, S. L., “An iterated dynasearch
algorithm for the single-machine total weighted tardiness scheduling problem”,
INFORMS Journal on Computing, Vol. 14, No. 1, (2002), pp. 52-67.
[37] Glover, F., McMillan, C., “The general employee scheduling problem: an
integration of MS and AI”, Computers & Operations Research, Vol. 13, No. 5,
(1986), pp. 563-573.
[38] Mladenović, N., Hansen, P., “Variable neighborhood search”, Computers &
Operations Research, Vol. 24, No. 11, (1997), pp. 1097–1100.
[39] Bonabeau, E. Dorigo, M., Theraulaz, G., Swarm intelligence: from natural to
artificial systems, Oxford University Press, (1999).
[40] Tripathi, P. K., Bandyopadhyay, S., Pal, S. K., “Multi-Objective Particle
Swarm Optimization with time variant inertia and acceleration coefficients”,
Information Sciences, Vol. 177, (2007), pp. 5033–5049.
[41] Voudouris, C., Tsang, E., “Partial constraint satisfaction problems and guided
local search”, Proceedings of Second International Conference on Practical
Application of Constraint Technology (PACT'96), (1996), pp. 337-356.
[42] Holland, J. H., Adaptation in natural and artificial systems: an introductory
analysis with applications to biology, control, and artificial intelligence,
Michigan, Ann Arbor, University of Michigan Press, 1975.
[43] Kirkpatrick, S., Gelatto, C. D., Vecchi, M. P., “Optimization by simulated
annealing”, Science, Vol. 220, (1983), pp. 671–680.
[44] Glover, F., “Tabu Search - Part 1”, ORSA Journal on Computing, Vol. 1, No.
2, (1989), pp. 190–206.
[45] Glover, F., “Tabu Search - Part 2”, ORSA Journal on Computing, Vol. 2, No.
1, (1990), pp. 4–32.
[46] Atashpaz-Gargari, E., Lucas, C., “Imperialist Competitive Algorithm: An
algorithm for optimization inspired by imperialistic competition”, Proceedings
of IEEE Congress on Evolutionary Computation, (2007), pp. 4661–4667.
[47] Goldberg, D. E., Deb, K., “A comparative analysis of selection schemes used
in genetic algorithms”, Proceedings of the First Workshop on Foundations of
Genetic Algorithms, (1990), pp. 69-93.
[48] Abraham, A., Nedjah, N., Mourelle, L. M., “Evolutionary Computation: from
Genetic Algorithms to Genetic Programming”, Studies in Computational
Intelligence (SCI), Vol. 13, (2006), pp. 1–20.
[49] Leung, Y., Gao, Y., Xu, Z. B., “Degree of population diversity - a
perspective on premature convergence in genetic algorithms and its markov
chain analysis”, IEEE Transaction on Neural Network, Vol. 8, No. 5, (1997),
pp. 1165-1176.
[50] Hrstka, O., Kučerová, A., “Improvements of real coded genetic algorithms
based on differential operators preventing premature convergence”, Advances
in Engineering Software, Vol. 35, (2004), pp. 237–246.
[51] Moslemipour, G., Lee, T.S., Rilling, D., “A review of intelligent approaches
for designing dynamic and robust layouts in flexible manufacturing systems”,
International Journal of Advanced Manufacturing Technology, Vol. 60,
(2012), pp. 11–27.
[52] Shi, Y., Eberhart, R. C., “Empirical study of particle swarm optimization”,
Proceedings of IEEE Congress on Evolutionary Computation, (1999), pp.
1945–1950.
[53] Bratton, D. and Kennedy, J., “Defining a standard for particle swarm
optimization”, Proceedings of the 2007 IEEE Swarm Intelligence Symposium,
(2007), pp. 120–127.
[54] Liang, J. J., Qin, A. K., Suganthan, P. N., Baskar, S., “Comprehensive
learning particle swarm optimizer for global optimization of multimodal
functions”, IEEE Transactions on Evolutionary Computation, Vol. 10, No. 3,
(2006), pp. 281–295.
[55] Zhan, Z.-H., Zhang, J., Li, Y., Chung, H.-S., “Adaptive Particle Swarm
Optimization”, IEEE Transactions on Systems, Man, and Cybernetics–Part B,
Vol. 39, No. 6, (2009), pp. 1362-1381.
[56] Gao, W-F., Liu, S-Y., Huang, L-L., “Particle swarm optimization with
chaotic opposition-based population initialization and stochastic search
technique”, Communications in Nonlinear Science and Numerical Simulation,
Vol. 17, No. 11, (2012), pp. 4316–4327.
[57] Clerc, M., Kennedy, J., “The particle swarm-explosion, stability and
convergence in a multidimensional complex space”, IEEE Transactions on
Evolutionary Computation, Vol. 6, No. 1, (2002), pp. 58–73.
[58] Trelea, I. C., “The particle swarm optimization algorithm: Convergence
analysis and parameter selection”, Information Processing Letters, Vol. 85,
No. 6, (2003), pp. 317–325.
[59] Kadirkamanathan, V., Selvarajah, K., Fleming, P. J., “Stability analysis of
the particle dynamics in particle swarm optimizer”, IEEE Transactions on
Evolutionary Computation, Vol. 10, No. 3, (2006), pp. 245–255.
[60] Yasuda, K., Ide, A., Iwasaki, N., “Stability analysis of particle swarm
optimization”, Proceedings of the Fifth Metaheuristics International
Conference, (2003), pp. 341–346.
[61] Van den Bergh, F., Engelbrecht, A. P., “A study of particle swarm
optimization particle trajectories”, Information Sciences, Vol. 176, No. 8,
(2006), pp. 937–971.
33 A Review of Population-based Meta-Heuristic
[74] Zhan, Z.-H., Zhang, J., Li, Y., Shi, Y.-H., “Orthogonal Learning Particle
Swarm Optimization”, IEEE Transactions on Evolutionary Computation, Vol.
15, No. 6, (2011), pp. 832-847.
[75] Biabangard-Oskouyi, A., Atashpaz-Gargari, E., Soltani, N., Lucas, C.,
“Application of imperialist competitive algorithm for materials property
characterization from sharp indentation test”, International Journal of
Engineering Simulation, Vol. 1, No. 3, (2009), pp. 337-355.
[76] Rajabioun, R., Hashemzadeh, F., Atashpaz-Gargari, E., Mesgari, B., Salmasi,
F.R., “Identification of a MIMO evaporator and its decentralized PID
controller tuning using Colonial Competitive Algorithm”, Proceedings of the
17th World Congress, the International Federation of Automatic Control,
(2008), pp. 9952-9957.
[77] Atashpaz-Gargari, E., Hashemzadeh, F., Rajabioun, R., Lucas, C., “Colonial
Competitive Algorithm, a novel approach for PID controller design in MIMO
distillation column process”, International Journal of Intelligent Computing
and Cybernetics, Vol. 1, No. 3, (2008), pp. 337–355.
[78] Schutz, B., Gravity from the ground up, Cambridge University Press, (2003).
[79] Kennedy, J., Eberhart, R. C., “A discrete binary version of the particle
swarm algorithm”, Proceedings of the IEEE International Conference on
Computational Cybernetics and Simulation, (1997), pp. 4104–4108.
[80] Rashedi, E., Nezamabadi-pour, H., Saryazdi, S., “BGSA: binary
gravitational search algorithm”, Natural Computing, Vol. 9, No. 3, (2010),
pp. 727–745.
[81] Kong, M., Tian, P., “Apply the particle swarm optimization to the
multidimensional knapsack problem”, In Rutkowski, L., Tadeusiewicz, R.,
Zadeh, L. A., Zurada, J. M. (Eds.) Artificial Intelligence and Computational
Intelligence, (2006), pp. 1140–1149, Springer-Verlag.
[82] Chuang, L.-Y., Chang, H.-W., Tu, C.-J., Yang, C.-H., “Improved binary PSO
for feature selection using gene expression data”, Computational Biology and
Chemistry, Vol. 32, No. 1, (2008), pp. 29-38.
[83] Mezmaz, M., Melab, N., Kessaci, Y., Lee, Y.-C., Talbi, E.-G., Zomaya, A.Y.,
Tuyttens, D., “A parallel bi-objective hybrid metaheuristic for energy-aware
scheduling for cloud computing systems”, Journal of Parallel and Distributed
Computing, Vol. 71, No. 11, (2011), pp. 1497–1508.
[84] Nezamabadi-pour, H., Rostami Shahrbabaki, M., Maghfoori-Farsangi, M.,
“Binary Particle Swarm Optimization: Challenges and new Solutions”, CSI
Journal on Computer Science and Engineering, in Persian, Vol. 6, No. 1,
(2008), pp. 21-32.
[85] Upadhyaya, S., Setiya, R., “Ant colony optimization: a modified version”,
International Journal of Advances in Soft Computing and Its Applications, Vol.
1, No. 2, (2009), pp. 77-90.
[86] Chen, W.-N., Zhang, J., Chung, H. S. H., Zhong, W.-L., Wu, W.-G., Shi,
Y.-H., “A novel set-based particle swarm optimization method for discrete
optimization problems”, IEEE Transactions on Evolutionary Computation,
Vol. 14, No. 2, (2010), pp. 278-300.
[87] Bagher, R. M., Payman, J., “Water delivery optimization program of
Jiroft dam irrigation networks by using genetic algorithm”, International
Journal of Advances in Soft Computing and Its Applications, Vol. 1, No. 2,
(2009), pp. 151-155.
[87] Juang, Y. T., Tung, S.-L., Chiu, H.-C., “Adaptive fuzzy particle swarm
optimization for global optimization of multimodal functions”, Information
Sciences, Vol. 181, (2011), pp. 4539–4549.
[88] Premalatha, K., Natarajan, A. M., “Hybrid PSO and GA Models for
Document Clustering”, International Journal of Advances in Soft Computing
and Its Applications, Vol. 2, No. 3, (2010), pp. 302-320.
[89] Hemanth, D. J., Vijila, C. K. S., Anitha, J., “Performance Improved PSO
based Modified Counter Propagation Neural Network for Abnormal MR Brain
Image Classification”, International Journal of Advances in Soft Computing
and Its Applications, Vol. 2, No. 1, (2010), pp. 65-84.
[90] Romeo, F., Sangiovanni-Vincentelli, A., “A theoretical framework for
simulated annealing”, Algorithmica, Vol. 6, (1991), pp. 302–345.
[91] Johnson, D. S., Aragon, C. R., McGeoch, L. A., Schevon, C., “Optimization
by simulated annealing: an experimental evaluation; Part II, graph coloring
and number partitioning”, Operations Research, Vol. 39, No. 3, (1991), pp.
378-406.
[92] Dorigo, M., Birattari, M., Stützle, T., “Ant colony optimization – artificial
ants as a computational intelligence technique”, IEEE Computational
Intelligence Magazine, (2006), pp. 28-39.
[93] Dorigo, M., Socha, K., “An Introduction to Ant Colony Optimization”, In
Gonzalez, T. F. (Ed.) Approximation Algorithms and Metaheuristics, (2007),
pp. 1-19, CRC Press.
[94] Chan, F. T. S., Tiwari, M. K., Swarm intelligence: focus on ant and particle
swarm optimization, InTech, (2007).