Advanced Optimization by Nature-Inspired Algorithms 1st Ed. 2018 Edition
Omid Bozorg-Haddad, Editor
Studies in Computational Intelligence
Volume 720
Series editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: [email protected]
Editor
Omid Bozorg-Haddad
Department of Irrigation & Reclamation Engineering
College of Agriculture & Natural Resources, University of Tehran
Karaj, Iran
Preface
Since the early 1990s, the introduction of the term Computational Intelligence
(CI) has highlighted the potential applicability of this field. One of the earliest
applications of the field was in the realm of optimization. Undoubtedly, the tasks of
design and operation of systems can be approached systematically through optimization.
While in most real-life problems, including engineering problems, the application of
classical optimization techniques is limited by the complex nature of the decision
space and the numerous variables, CI-based optimization techniques, which imitate
nature as a source of inspiration, have proven quite useful. Consequently, during the
last few decades, a considerable number of novel nature-based optimization algorithms
have been proposed in the literature. While most of these algorithms hold considerable
promise, a majority of them are still in their infancy. For such algorithms to bloom
and reach their full potential, they should be applied to numerous optimization
problems, so that not only are their most suitable sets of optimization problems
recognized, but adaptive strategies are also introduced to make them suitable for
wider sets of optimization problems. To that end, this book specifically aims to
introduce some of these promising nature-based algorithms to multidisciplinary
students, including those in aeronautical engineering, mechanical engineering,
industrial engineering, electrical and electronic engineering, chemical engineering,
civil engineering, computer science, applied mathematics, physics, economics, biology,
and social science, and particularly those pursuing postgraduate studies in advanced
subjects. Chapter 1 of the book is a review of the basic principles of optimization and
nature-based optimization algorithms. Chapters 2–15 are respectively dedicated to
Cat Swarm Optimization (CSO), League Championship Algorithm (LCA),
Anarchic Society Optimization (ASO), Cuckoo Optimization Algorithm (COA),
Teaching-Learning-Based Optimization (TLBO), Flower Pollination Algorithm
(FPA), Krill Herd Algorithm (KHA), Grey Wolf Optimization (GWO), Shark Smell
Optimization (SSO), Ant Lion Optimizer (ALO), Gradient Evolution (GE),
Moth-Flame Optimization (MFO), Crow Search Algorithm (CSA), and Dragonfly
Algorithm (DA). The order of the chapters corresponds to the chronological
appearance of these algorithms, from earlier algorithms to newly introduced ones.
Each chapter describes a specific algorithm, starting with a brief literature review of
its development and subsequent modifications since its inception. This is followed by
a presentation of the basic concept on which the algorithm is based and the steps of
the algorithm. Each chapter closes with a pseudocode of the algorithm.
Contents

1 Introduction
Babak Zolghadr-Asli, Omid Bozorg-Haddad and Xuefeng Chu
2 Cat Swarm Optimization (CSO) Algorithm
Mahdi Bahrami, Omid Bozorg-Haddad and Xuefeng Chu
3 League Championship Algorithm (LCA)
Hossein Rezaei, Omid Bozorg-Haddad and Xuefeng Chu
4 Anarchic Society Optimization (ASO) Algorithm
Atiyeh Bozorgi, Omid Bozorg-Haddad and Xuefeng Chu
5 Cuckoo Optimization Algorithm (COA)
Saba Jafari, Omid Bozorg-Haddad and Xuefeng Chu
6 Teaching-Learning-Based Optimization (TLBO) Algorithm
Parisa Sarzaeim, Omid Bozorg-Haddad and Xuefeng Chu
7 Flower Pollination Algorithm (FPA)
Marzie Azad, Omid Bozorg-Haddad and Xuefeng Chu
8 Krill Herd Algorithm (KHA)
Babak Zolghadr-Asli, Omid Bozorg-Haddad and Xuefeng Chu
9 Grey Wolf Optimization (GWO) Algorithm
Hossein Rezaei, Omid Bozorg-Haddad and Xuefeng Chu
10 Shark Smell Optimization (SSO) Algorithm
Sahar Mohammad-Azari, Omid Bozorg-Haddad and Xuefeng Chu
11 Ant Lion Optimizer (ALO) Algorithm
Melika Mani, Omid Bozorg-Haddad and Xuefeng Chu
12 Gradient Evolution (GE) Algorithm
Mehri Abdi-Dehkordi, Omid Bozorg-Haddad and Xuefeng Chu
Chapter 1
Introduction
1.1 Introduction
Assume that a group of amateur climbers decides to summit the highest mountain in
a previously unknown and hilly territory. Searching for the summit is not unlike
searching for the optimal solution: the landscape represents the decision space,
while the highest mountain embodies the global optimum. But how does one even
begin to search such a vast area? One initial answer would be to map out the entire
landscape. However, this would be both a time- and energy-consuming task. An
alternative would be to search the area randomly. While this would definitely save
both time and energy, it would ultimately not be an efficient strategy either, for
there would be no guarantee of reaching the highest mountain. A more intelligent
alternative would be to climb directly up the steepest slope, assuming that the
highest mountain is more likely to have a steeper slope. Essentially, this strategy
represents the core principle of classical optimization techniques. While efficient in
many cases, if the climbers' path were interrupted by cliffs (a discrete decision
space), for instance, this strategy would not be efficient for locating the highest
mountain. Additionally, in a hilly landscape, climbers could be deceived into
climbing to the top of a mountain that stands above its neighboring mountains but
is not, in fact, the highest mountain in the entire area. In technical terms, this
problem is known as being trapped in a local optimum.
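The trap described above is easy to reproduce. Below is a minimal Python sketch (the language, function names, and the example landscape are illustrative assumptions, not from the book) of the steepest-climb strategy stalling on a lower peak:

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: repeatedly move to the better neighbor.

    This mirrors the climbers' steepest-slope strategy; it stops at
    the first peak it reaches, which may only be a local optimum."""
    for _ in range(iters):
        best = max([x - step, x + step], key=f)
        if f(best) <= f(x):
            break  # no uphill neighbor left: a peak has been reached
        x = best
    return x

# A landscape with a low peak near x = -1 and the global peak near x = 2.
f = lambda x: -((x - 2) ** 2) * ((x + 1) ** 2) + 0.5 * x

print(hill_climb(f, -2.0))  # starts on the wrong slope, stalls near -1
print(hill_climb(f, 3.0))   # starts on the right slope, finds the peak near 2
```

Both runs use the same rule; only the starting point decides which peak is found, which is exactly the local-optimum problem described above.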
Alternatively, the climbers could perform a random walk in the area while looking
for clues. Such hybrid strategies combine random searching with an adaptive
strategy, which is usually inspired by nature. In fact, that is a description of
CI-based optimization algorithms. Such searches could be conducted while the group
sticks together (performing as an individual climber). Alternatively, the group
members can spread out and share the information they gain with each other as the
search proceeds. Technically speaking, the former strategy is known as a single-point
optimization technique, while the latter represents population-based optimization
algorithms. The single-point strategies are also known as trajectory optimization
algorithms, for the optimization process provides a path that could lead to the
optimum point, in this case, the highest mountain in the area.
Additionally, the climbers, either as one group or as separate individuals, could
investigate the area using only the information currently at hand. An alternative
would be to keep a record of previously encountered locations. In technical terms,
the second strategy characterizes memory-using algorithms. While such a strategy is
more efficient in most cases, if the population of climbers increases or the
landscape is vast enough, storing such massive amounts of information could become a
major problem.
Ultimately, the core principle of all CI-based optimization algorithms, which are
better known as metaheuristic algorithms, is trial and error used to produce an
acceptable solution to a complex problem in a reasonably practical time (Yang
2010).
Despite their novel and ubiquitous nature, CI-based optimization algorithms are
indeed a relatively new technique, though it is difficult to pinpoint when the whole
story began. Alan Turing was perhaps the first person to implement a CI-based
optimization technique (Yang 2010). During World War II, while trying to break the
German-designed encrypting machine called Enigma, Turing developed an optimization
technique which he later named heuristic search, as it could be expected to work
most of the time, while there was no actual guarantee of a successful performance in
each trial. Turing was later recruited to the National Physical Laboratory (NPL),
UK, where he set out his design for the Automatic Computing Engine. Later on, he
outlined his innovative ideas of machine intelligence and learning, neural networks,
and evolutionary algorithms in an NPL report on intelligent machinery (Yang 2010).
The CI-based optimization techniques bloomed during the 1960s and 1970s. In
the early 1960s, John Holland and his collaborators at the University of Michigan
developed the genetic algorithm (GA) (Goldberg and Holland 1988). In essence, a
GA is a search method based on an abstraction of Darwin's theory of evolution and
the natural selection of biological systems, represented as mathematical operators.
Holland's preliminary studies showed promising results, and he continued to develop
his technique by introducing the novel and efficient operators he named crossover
and mutation, although his seminal book summarizing the development of the genetic
algorithm was not published until 1975 (Yang 2010). Holland's work inspired many
others to develop and adopt similar methods in their research, which benefited from
the same basic principle in numerous and varied fields. For instance, while Holland
and his colleagues were developing their revolutionary method, Ingo Rechenberg and
Hans-Paul Schwefel at the Technical University of Berlin introduced another novel
optimization technique for solving aerospace engineering problems, which they later
named the evolution strategy (Back et al. 1997). In 1966, Fogel et al. (1966)
developed an evolutionary programming technique by representing solutions as finite
state machines and randomly mutating one of these machines. These innovative ideas
and methods have evolved into a wider area that became known as evolutionary
algorithms (EAs) (Yang 2010).
In the early 1990s, in another great leap forward in the field of CI-based
optimization algorithms, Marco Dorigo finished his Ph.D. thesis on optimization and
nature-inspired algorithms, in which he described his innovative work on ant colony
optimization (ACO) (Dorigo and Blum 2005). This search technique was inspired by
the swarm intelligence of social ants, which use pheromones to trace food sources.
Slightly later, in 1995, the particle swarm optimization (PSO) algorithm was
proposed by Kennedy and Eberhart (Poli et al. 2007).
It can be beneficial for study purposes to categorize and classify the CI-based
optimization techniques, although many classifications are possible depending on the
specific viewpoint. While some of these classifications are quite technical and, in
some cases, even based upon vague characteristics, more general classifications can
help in better understanding the core principles behind such algorithms.
Intuitively, one can categorize the CI-based algorithms by the number of searching
agents. This characteristic divides the algorithms into two major categories:
(1) population-based algorithms, and (2) single-point algorithms. The algorithms
working with a single agent are called trajectory methods. The population-based
algorithms, on the other hand, perform search processes using several agents
simultaneously.
As stated in the previous sections, some CI-based algorithms can keep a record of
previously inspected arrays of decision variables in the search space. Such
algorithms are known as memory-using algorithms. On the contrary, algorithms known
as memory-less algorithms do not memorize the previously encountered locations in
the search space. The ways of making use of memory in a CI-based optimization
algorithm can be further divided into short-term and long-term memory-using
algorithms. The former usually keeps track of recently performed moves, visited
solutions or, in general, decisions taken, while the latter is usually an
accumulation of synthetic parameters about the search. The use of memory is nowadays
recognized as one of the fundamental elements of a powerful CI-based optimization
algorithm (Blum and Roli 2003).
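As an illustration of short-term memory, the Python sketch below (a tabu-style list is used here as one common realization; the names and the toy landscape are hypothetical, not a method from the book) forbids recently visited solutions so that a greedy search can walk off a local peak:

```python
from collections import deque

def memory_search(f, x0, neighbors, iters=50, memory_size=5):
    """Greedy search with short-term memory: the last few visited
    solutions are kept in a fixed-size list and excluded from
    re-selection, letting the search leave points it would revisit."""
    recent = deque(maxlen=memory_size)  # short-term memory
    x, best = x0, x0
    for _ in range(iters):
        candidates = [n for n in neighbors(x) if n not in recent]
        if not candidates:
            break
        x = max(candidates, key=f)      # best non-forbidden neighbor
        recent.append(x)
        if f(x) > f(best):
            best = x
    return best

# Integer landscape: a local peak at x = 3, the global peak at x = 8.
f = lambda x: {3: 5, 8: 9}.get(x, -abs(x - 8))
nbrs = lambda x: [x - 1, x + 1]
print(memory_search(f, 0, nbrs))  # reaches 8; plain greedy stops at 3
```

The fixed-size deque caps the storage cost, which reflects the trade-off mentioned above: memory helps the search, but storing every visited location would not scale.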
As mentioned earlier, one of the challenges in complex optimization problems is to
avoid being trapped in local optima, which has been a major problem for classical
optimization techniques. To overcome this problem, some CI-based optimization
algorithms modify their objective function during the optimization process. Such
optimization techniques are known as dynamic algorithms. The alternative is to keep
the objective function as is during the optimization process; this is the
characteristic of static algorithms.
function space, yet as evidenced by numerous studies, such algorithms may and
will outperform one another on specific sets of optimization problems. In addition,
algorithm development is now focused on finding the best and most efficient
algorithm for a specific set of optimization problems. Ultimately, instead of aiming
to design a perfect solver for all problems, algorithms are developed to solve most
types of problems. As a result, during the last several decades, a considerable
number of novel nature-based optimization algorithms have been proposed. While most
of these algorithms hold considerable promise, a majority of them are still in their
infancy. For such algorithms to bloom and reach their full potential, they should be
applied to numerous optimization problems, so that not only are they tested to find
their most suitable sets of optimization problems, but adaptive strategies are also
introduced to make them suitable for wider sets of optimization problems. To that
end, this book is specifically aimed at introducing some of the promising
nature-based algorithms that can be useful in multidisciplinary studies, including
those in aeronautical engineering, mechanical engineering, industrial engineering,
electrical and electronic engineering, chemical engineering, civil and environmental
engineering, computer science and engineering, applied mathematics, physics,
economics, biology, and social science, and particularly in postgraduate studies in
advanced subjects.
1.6 Conclusion
References

Back, T., Hammel, U., & Schwefel, H. P. (1997). Evolutionary computation: Comments on the
history and current state. IEEE Transactions on Evolutionary Computation, 1(1), 3–17.
Bezdek, J. C. (1992). On the relationship between neural networks, pattern recognition and
intelligence. International Journal of Approximate Reasoning, 6(2), 85–107.
Blum, C., & Roli, A. (2003). Metaheuristics in combinatorial optimization: Overview and
conceptual comparison. ACM Computing Surveys (CSUR), 35(3), 268–308.
Dorigo, M., & Blum, C. (2005). Ant colony optimization theory: A survey. Theoretical Computer
Science, 344(2–3), 243–278.
Du, K. L., & Swamy, M. N. S. (2016). Search and optimization by metaheuristics. Switzerland:
Springer.
Fogel, L. J., Owens, A. J., & Walsh, M. J. (1966). Intelligent decision making through a simulation
of evolution. Behavioral Science, 11(4), 253–272.
Goldberg, D. E., & Holland, J. H. (1988). Genetic algorithms and machine learning. Machine
Learning, 3(2), 95–99.
Poli, R., Kennedy, J., & Blackwell, T. (2007). Particle swarm optimization. Swarm Intelligence,
1(1), 33–57.
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE
Transactions on Evolutionary Computation, 1(1), 67–82.
Xing, B., & Gao, W. J. (2014). Innovative computational intelligence: A rough guide to 134 clever
algorithms. Cham, Switzerland: Springer.
Yang, X. S. (2010). Nature-inspired metaheuristic algorithms. Frome, UK: Luniver Press.
Chapter 2
Cat Swarm Optimization (CSO)
Algorithm
Abstract In this chapter, a brief literature review of the Cat Swarm Optimization
(CSO) algorithm is presented. Then the natural process, the basic CSO algorithm
iteration procedure, and the computational steps of the algorithm are detailed.
Finally, a pseudocode of the CSO algorithm is presented to demonstrate the
implementation of this optimization technique.
2.1 Introduction
Optimization algorithms based on Swarm Intelligence (SI) were developed to simulate
the intelligent behavior of animals. In these modeling systems, a population of
organisms such as ants, bees, birds, or fish interact with one another and with
their environment through sharing information, resulting in effective use of their
environment and resources. One of the more recent SI-based optimization algorithms
is the Cat Swarm Optimization (CSO) algorithm, which is based on the behavior of
cats. Developed by Chu and Tsai (2007), the CSO algorithm and its variants have been
applied to different optimization problems. Different variations of the algorithm
have been developed by researchers. Tsai et al. (2008) presented a parallel
structure of the algorithm (i.e., parallel CSO or PCSO). They further developed an
enhanced version of PCSO (EPCSO) by incorporating the Taguchi method into the
tracing mode of the algorithm (Tsai et al. 2012). The binary version of CSO (BCSO)
was developed by Sharafi et al. (2013) and applied to a number of benchmark
optimization problems and the zero–one knapsack problem. The chaotic cat swarm
algorithm (CCSA) was developed by Yang et al. (2013a); using different chaotic maps,
the seeking mode step of the algorithm was improved. Based on the concept of
homotopy, Yang et al. (2013b) proposed the homotopy-inspired cat swarm algorithm
(HCSA) in order to improve the search efficiency. Lin et al. (2014a) proposed a
method to improve CSO and presented the Harmonious-CSO (HCSO). Lin et al. (2014b)
introduced a modified CSO (MCSO) algorithm capable of improving the search
efficiency within the problem space; the basic CSO algorithm was also integrated
with a local search procedure as well as the feature selection of support vector
machines (SVMs), and this method changed the concept of cats' alertness to their
surroundings in the seeking mode of the CSO algorithm. By dynamically adjusting the
mixture ratio (MR) parameter of the CSO algorithm, Wang (2015) enhanced the CSO
algorithm with adaptive parameter control. A hybrid cat swarm optimization method
was developed by Naidu and Ojha (2015) by adding the invasive weed optimization
(IWO) algorithm to the tracing mode of the CSO algorithm.
Several other authors have used the CSO algorithm in different fields of research on
optimization problems. Lin and Chien (2009) constructed a CSO + SVM model for data
classification by integrating cat swarm optimization into the SVM classifier.
Pradhan and Panda (2012) proposed a new multiobjective evolutionary algorithm (MOEA)
by extending the CSO algorithm; the MOEA identified the non-dominated solutions
along the search process using the concept of Pareto dominance and stored them in an
external archive. Xu and Hu (2012) presented a CSO-based method for a
resource-constrained project scheduling problem (RCPSP). Saha et al. (2013) applied
the CSO algorithm to determine the optimal impulse response coefficients of FIR
low-pass, high-pass, band-pass, and band-stop filters to meet the respective ideal
frequency response characteristics. So and Jenkins (2013) used CSO for Infinite
Impulse Response (IIR) system identification on a few benchmark IIR plants. Kumar
et al. (2014) optimized the placement and sizing of multiple distributed generators
using CSO. Mohamadeen et al. (2014) compared the binary CSO with the binary PSO in
selecting the best transformer tests that were utilized to classify transformer
health, and thus to improve the reliability of identifying the transformer condition
within the power system. Guo et al. (2015) proposed an improved cat swarm
optimization algorithm and redefined some basic CSO concepts and operations
according to the assembly sequence planning (ASP) characteristics. Bilgaiyan et al.
(2015) used a cat swarm-based multi-objective optimization approach to schedule
workflows in a cloud computing environment, which showed better performance compared
with the multi-objective particle swarm optimization (MOPSO) technique. Amara et al.
(2015) solved the problem of wind power system design reliability optimization using
CSO, under performance and cost constraints. Meziane et al. (2015) optimized the
electric power distribution of a solar system by determining the optimal topology
among various alternatives using CSO; the results showed better performance than the
binary CSO. Ram et al. (2015) studied a 9-ring time-modulated concentric circular
antenna array (TMCCAA) with isotropic elements based on CSO, for reduction of the
side lobe level and improvement of the directivity. Crawford et al. (2016) solved a
bi-objective set covering problem using the binary cat swarm optimization algorithm.
Majumder and Eldho (2016) examined the effectiveness of CSO for groundwater
management problems by coupling it with the analytic element method (AEM) and the
reverse particle tracking (RPT) approach; the AEM-CSO model was applied to a
hypothetical unconfined aquifer considering two different objectives: maximization
of the total pumping of groundwater from the aquifer and minimization of the total
pumping costs. Mohapatra et al. (2016) used kernel ridge regression and a modified
CSO-based gene selection system for classification of microarray medical datasets.
Despite spending most of their time resting, cats have high alertness and curiosity
about their surroundings and the moving objects in their environment. This behavior
helps cats find prey and hunt it down. Compared to the time dedicated to resting,
they spend very little time chasing prey, in order to conserve their energy.
Inspired by this hunting pattern, Chu and Tsai (2007) developed CSO with two modes:
the seeking mode, for when cats are resting, and the tracing mode, for when they are
chasing their prey. In CSO, a population of cats is created and randomly distributed
in the M-dimensional solution space, with each cat representing a solution. This
population is divided into two subgroups. The cats in the first subgroup are resting
and keeping an eye on their surroundings (i.e., seeking mode), while the cats in the
second subgroup are moving around and chasing their prey (i.e., tracing mode). The
mixture of these two modes helps CSO move toward the global solution in the
M-dimensional solution space. Since the cats spend very little time in the tracing
mode, the number of cats in the tracing subgroup should be small. This number is
defined by the mixture ratio (MR), which has a small value. After sorting the cats
into these two modes, new positions and fitness functions become available, and the
cat with the best solution is saved in memory. These steps are repeated until the
stopping criteria are satisfied.
Following Chu and Tsai (2007), the computational procedure of CSO can be
described as follows:
Step 1: Create the initial population of cats and disperse them into the
M-dimensional solution space (X_i,d), randomly assigning each cat a
velocity within the range of the maximum velocity value (v_i,d).
Step 2: According to the value of MR, assign each cat a flag to sort it into the
seeking or tracing mode process.
Step 3: Evaluate the fitness value of each cat and save the cat with the best
fitness function. The position of the best cat (X_best) represents the best
solution so far.
Step 4: Based on their flags, apply the cats to the seeking or tracing mode
process as described below.
Step 5: If the termination criteria are satisfied, terminate the process. Otherwise,
repeat steps 2 through 5.
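The five steps above can be condensed into a short Python sketch. This is an illustrative reading of the chapter's description, not the authors' code: minimization is assumed, the parameter defaults are hypothetical, and the seeking mode is simplified so that each resting cat simply keeps the best of its SMP perturbed copies.

```python
import random

def cso(fitness, dim, n_cats=20, mr=0.2, smp=5, srd=0.2, c1=2.0,
        v_max=1.0, iters=100, bounds=(-10.0, 10.0)):
    """Minimal Cat Swarm Optimization loop for minimization."""
    lo, hi = bounds
    # Step 1: random positions and velocities for the population.
    cats = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_cats)]
    vels = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_cats)]
    best = list(min(cats, key=fitness))          # step 3: best cat so far
    for _ in range(iters):
        for k in range(n_cats):
            # Step 2: flag the cat into tracing mode with probability MR.
            if random.random() < mr:
                # Step 4a, tracing: velocity pulled toward the best cat,
                # capped at v_max, then added to the position.
                for d in range(dim):
                    vels[k][d] += random.random() * c1 * (best[d] - cats[k][d])
                    vels[k][d] = max(-v_max, min(v_max, vels[k][d]))
                    cats[k][d] += vels[k][d]
            else:
                # Step 4b, seeking: SMP perturbed copies, keep the best one.
                copies = [[x * (1 + srd * random.uniform(-1, 1)) for x in cats[k]]
                          for _ in range(smp)]
                cats[k] = min(copies, key=fitness)
        # Steps 3 and 5: remember the best solution, repeat until done.
        best = list(min(cats + [best], key=fitness))
    return best

sphere = lambda x: sum(v * v for v in x)  # test function, minimum at the origin
print(cso(sphere, dim=2))
```

On a smooth test function such as the sphere, the returned position should approach the origin, with the MR parameter controlling how many cats trace versus seek in each iteration.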
Table 2.1 lists the characteristics of the CSO and Fig. 2.1 illustrates the detailed
computational steps of the CSO algorithm.
During this mode, the cat is resting while keeping an eye on its environment. In
case of sensing a prey or danger, the cat decides on its next move. If the cat
decides to move, it does so slowly and cautiously. Just as while resting, in the
seeking mode the cat observes the M-dimensional solution space in order to decide
its next move. In this situation, the cat is aware of its own situation, its
environment, and the choices it can make for its movement. These are represented in
the CSO algorithm by four parameters: seeking memory pool (SMP), seeking range of
the selected dimension (SRD), counts of dimension to change (CDC), and self-position
considering (SPC).
[Fig. 2.1 flowchart: each cat is sent to the seeking or tracing mode according to its flag; the fitness functions are then re-evaluated, the cat with the best solution is kept in memory, and the loop repeats until the termination criteria are satisfied.]
Following Chu and Tsai (2007), the process of the seeking mode is described
below.
Step 1: Make SMP copies of each cat_i. If the value of SPC is true, SMP − 1 copies
are made and the current position of the cat is kept as one of the copies.
Step 2: For each copy, according to CDC, calculate a new position by using
Eq. (2.1) (Majumder and Eldho 2016):

X_cn = X_c × (1 ± SRD × R)  (2.1)

in which
X_c = current position;
X_cn = new position; and
R = a random number, which varies between 0 and 1.
Step 3: Compute the fitness values (FS) of the new positions. If all FS values are
exactly equal, set the selecting probability to 1 for all candidate points.
Otherwise, calculate the selecting probability of each candidate point by
using Eq. (2.2).
Step 4: Using the roulette wheel, randomly pick the point to move to from the
candidate points, and replace the position of cat_i.
P_i = |FS_i − FS_b| / |FS_max − FS_min|, where 0 < i < j  (2.2)

where
P_i = selecting probability of the current candidate cat_i;
FS_i = fitness value of cat_i;
FS_max = maximum value of the fitness function;
FS_min = minimum value of the fitness function; and
FS_b = FS_max for minimization problems and
FS_b = FS_min for maximization problems.
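A sketch of these seeking-mode steps for a single cat, in Python. This is an illustration under stated assumptions: minimization, CDC covering all dimensions, and the commonly cited form of Eq. (2.1), X_cn = X_c(1 ± SRD × R); the function name and defaults are hypothetical.

```python
import random

def seeking_mode(fitness, cat, smp=5, srd=0.2, spc=True):
    """Seeking mode, steps 1-4, for one cat (minimization assumed)."""
    # Step 1: SMP copies; with SPC the current position stays as one copy.
    copies = [list(cat)] if spc else []
    while len(copies) < smp:
        # Step 2: perturb each coordinate by up to +/- SRD, per Eq. (2.1).
        copies.append([x * (1 + srd * random.uniform(-1, 1)) for x in cat])
    # Step 3: selecting probabilities from the fitness values, per Eq. (2.2).
    fs = [fitness(c) for c in copies]
    fs_max, fs_min = max(fs), min(fs)
    if fs_max == fs_min:
        probs = [1.0] * len(copies)  # all candidates equally fit
    else:
        # FSb = FSmax for minimization, so a smaller FS gives a larger Pi.
        probs = [abs(f - fs_max) / (fs_max - fs_min) for f in fs]
    # Step 4: roulette-wheel pick among the candidate points.
    return random.choices(copies, weights=probs, k=1)[0]

sphere = lambda x: sum(v * v for v in x)
print(seeking_mode(sphere, [1.0, -2.0]))
```

Note that the worst candidate gets probability zero and the best gets probability one, so the roulette wheel favors, but does not force, the fittest copy.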
The tracing mode simulates the cat chasing a prey. After finding a prey while
resting (seeking mode), the cat decides its movement speed and direction based on
the prey's position and speed. In CSO, the velocity of cat k in dimension d is
given by

v_k,d = v_k,d + r_1 × c_1 × (X_best,d − X_k,d)  (2.3)

in which v_k,d = velocity of cat k in dimension d; X_best,d = position of the cat with
the best solution; X_k,d = position of cat_k; c_1 = a constant; and r_1 = a random
value in the range of [0, 1]. Using this velocity, the cat moves in the M-dimensional
decision space and reports every new position it takes. If the velocity of the cat is
greater than the maximum velocity, its velocity is set to the maximum velocity. The
new position of each cat is calculated by

X_k,d,new = X_k,d,old + v_k,d  (2.4)

in which
X_k,d,new = new position of cat k in dimension d; and
X_k,d,old = current position of cat k in dimension d.
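These tracing-mode updates for a single cat can be written as the following Python sketch (illustrative; the function name and parameter defaults are assumptions):

```python
import random

def tracing_mode(cat, vel, best, c1=2.0, v_max=1.0):
    """One tracing-mode step: pull the velocity toward the best cat,
    cap it at the maximum velocity, then move the cat."""
    new_pos, new_vel = [], []
    for d in range(len(cat)):
        v = vel[d] + random.random() * c1 * (best[d] - cat[d])
        v = max(-v_max, min(v_max, v))   # cap at the maximum velocity
        new_vel.append(v)
        new_pos.append(cat[d] + v)       # new position = old position + velocity
    return new_pos, new_vel

pos, vel = tracing_mode([5.0, -5.0], [0.0, 0.0], best=[0.0, 0.0])
print(pos, vel)  # each coordinate moves at most v_max toward the best cat
```

Because the velocity is capped, a far-away cat approaches the best solution in bounded steps rather than jumping straight to it.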
Chu and Tsai (2007) used six test functions to evaluate the performance of CSO and
compared the results with the particle swarm optimization (PSO) algorithm and PSO
with a weighting factor (PSO-WF). According to the results, CSO outperformed PSO and
PSO-WF in finding the global best solutions.
Begin
  Input the parameters of the algorithm and generate the initial population of cats with random positions and velocities
  Calculate the fitness function values for all cats and sort them
  While (termination criteria are not satisfied)
    For i = 1: number of cats
      Assign cat i a seeking or tracing flag according to MR
      If cat i is in seeking mode
        If SPC = 1
          Make SMP − 1 copies of cat i and keep its current position as one copy
        Else
          Make SMP copies of cat i
        End if
        Perturb the copies by Eq. (2.1), compute the selecting probabilities by Eq. (2.2), and pick the new position by the roulette wheel
      Else
        Update the velocity and position of cat i as in the tracing mode
      End if
    End for i
    Re-evaluate the fitness functions and keep the cat with the best solution in the memory
  End while
End
2.6 Conclusion
This chapter described cat swarm optimization (CSO), which is a relatively new
swarm-based algorithm. CSO consists of two modes, the seeking mode and the tracing
mode, which simulate the resting and hunting behaviors of cats. Each cat has a
position in the M-dimensional solution space. The cats' movement toward the optimum
solution is based on a flag that sorts them into the seeking or tracing mode, the
first being a slow movement around their environment and the latter a fast movement
toward the global best.
A literature review of CSO was presented, showing the success of the algorithm on
different optimization problems, along with the different variations of the
algorithm developed by other researchers. The flowchart of CSO and the pseudocode
were also presented in order to make the different parts of the algorithm easier to
understand. These sources are a good reference point for further exploration of the
CSO algorithm.
References
Amara, M., Bouanane, A., Meziane, R., & Zeblah, A. (2015). Hybrid wind gas reliability
optimization using cat swarm approach under performance and cost constraints. 3rd
International Renewable and Sustainable Energy Conference (IRSEC), Marrakech and
Ouarzazate, Morocco, 1013 December.
Bilgaiyan, S., Sagnika, S., & Das, M. (2015). A multi-objective cat swarm optimization algorithm
for workflow scheduling in cloud computing environment. Intelligent Computing,
Communication and Devices (pp. 7384). New Delhi, India: Springer.
Chu, S. C., & Tsai, P. W. (2007). Computational intelligence based on the behavior of cats.
International Journal of Innovative Computing, Information and Control, 3(1), 163173.
Crawford, B., Soto, R., Caballero, H., Olgun, E., & Misra, S. (2016). Solving biobjective set
covering problem using binary cat swarm optimization algorithm. The 16th International
Conference on Computational Science and Its Applications, Beijing, China, 47 July.
Guo, J., Sun, Z., Tang, H., Yin, L., & Zhang, Z. (2015). Improved cat swarm optimization
algorithm for assembly sequence planning. Open Automation and Control Systems Journal, 7,
792799.
Kumar, D., Samantaray, S. R., Kamwa, I., & Sahoo, N. C. (2014). Reliability-constrained based
optimal placement and sizing of multiple distributed generators in power distribution network
using cat swarm optimization. Electric Power Components and Systems, 42(2), 149164.
Lin, K. C., & Chien, H. Y. (2009). CSO-based feature selection and parameter optimization for
support vector machine. Joint Conferences on Pervasive Computing (JCPC), Taipei, Taiwan,
35 December.
Lin, K. C., Zhang, K. Y., & Hung, J. C. (2014a). Feature selection of support vector machine
based on harmonious cat swarm optimization. Ubi-Media Computing and Workshops
(UMEDIA), Ulaanbaatar, Mongolia, 1214 July.
Lin, K. C., Huang, Y. H., Hung, J. C., & Lin, Y. T. (2014b). Modied cat swarm optimization
algorithm for feature selection of support vector machines. Frontier and Innovation in Future
Computing and Communications, 329336.
Majumder, P., & Eldho, T. I. (2016). A new groundwater management model by coupling analytic
element method and reverse particle tracking with cat swarm optimization. Water Resources
Management, 30(6), 19531972.
Meziane, R., Boufala, S., Amara, M., & Hamzi, A. (2015). Cat swarm algorithm constructive
method for hybrid solar gas power system reconguration. 3rd International Renewable and
Sustainable Energy Conference (IRSEC), Marrakech and Ouarzazate, Morocco, 1013
December.
Mohamadeen, K. I., Sharkawy, R. M., & Salama, M. M. (2014). Binary cat swarm optimization
versus binary particle swarm optimization for transformer health index determination. 2nd
International Conference on Engineering and Technology, Cairo, Egypt, 1920 April.
Mohapatra, P., Chakravarty, S., & Dash, P. K. (2016). Microarray medical data classication using
kernel ridge regression and modied cat swarm optimization based gene selection system.
Swarm and Evolutionary Computation, 28, 144160.
Naidu, Y. R., & Ojha, A. K. (2015). A hybrid version of invasive weed optimization with
quadratic approximation. Soft Computing, 19(12), 35813598.
18 M. Bahrami et al.
Pradhan, P. M., & Panda, G. (2012). Solving multiobjective problems using cat swarm optimization.
Expert Systems with Applications, 39(3), 29562964.
Ram, G., Mandal, D., Kar, R., & Ghoshal, S. P. (2015). Cat swarm optimization as applied to
time-modulated concentric circular antenna array: Analysis and comparison with other
stochastic optimization methods. IEEE Transactions on Antennas and Propagation, 63(9),
41804183.
Saha, S. K., Ghoshal, S. P., Kar, R., & Mandal, D. (2013). Cat swarm optimization algorithm for
optimal linear phase FIR lter design. ISA Transactions, 52(6), 781794.
Shara, Y., Khanesar, M. A., & Teshnehlab, M. (2013). Discrete binary cat swarm optimization
algorithm. In Computer, Control & Communication (IC4). 3rd IEEE International Conference
on Computer, Control & Communication (IC4), Karachi, Pakistan, 2526 September.
So, J., & Jenkins, W. K. (2013). Comparison of cat swarm optimization with particle swarm
optimization for IIR system identication. Asilomar Conference on Signals, Systems and
Computers, Pacic Grove, CA, USA, 69 November.
Tsai, P. W., Pan, J. S., Chen, S. M., Liao, B. Y., & Hao, S. P. (2008). Parallel cat swarm
optimization. International Conference on Machine Learning and Cybernetics, Kunming,
China, 1215 July.
Tsai, P. W., Pan, J. S., Chen, S. M., & Liao, B. Y. (2012). Enhanced parallel cat swarm
optimization based on the Taguchi method. Expert Systems with Applications, 39(7),
63096319.
Wang, J. (2015). A new cat swarm optimization with adaptive parameter control. Genetic and
Evolutionary Computing, 6978.
Xu, L., & Hu, W. B. (2012). Cat swarm optimization-based schemes for resource-constrained
project scheduling. Applied Mechanics and Materials, 220, 251258.
Yang, S. D., Yi, Y. L., & Shan, Z. Y. (2013a). Chaotic cat swarm algorithms for global numerical
optimization. Advanced Materials Research, 602, 17821786.
Yang, S. D., Yi, Y. L., & Lu, Y. P. (2013b). Homotopy-inspired cat swarm algorithm for global
optimization. Advanced Materials Research, 602, 17931797.
Chapter 3
League Championship Algorithm (LCA)
3.1 Introduction
The league championship algorithm (LCA) is one of the newer evolutionary algorithms
(EAs) for finding the global optimum in a continuous search space. It was first proposed
by Kashan (2009), and Kashan (2011) developed the basic LCA for solving constrained
optimization benchmark problems. The results demonstrated that LCA is a very
competitive algorithm for constrained optimization problems. Kashan et al. (2012)
modified the basic LCA by using a two-halves analysis instead of the post-match
SWOT analysis. The performance of the more realistic modified LCA (RLCA) was
compared with those of particle swarm optimization (PSO) and the basic LCA
in finding the global solutions of different benchmark problems. The results indicated
the better performance of RLCA in terms of the quality of final solutions and
the convergence speed. LCA has since been applied to a variety of optimization problems.
Lenin et al. (2013) utilized LCA for solving a multi-objective dispatch problem.
Abdulhamid and Latiff (2014) used an LCA-based job scheduling scheme for
optimization of infrastructure as a service (IaaS) cloud. Sajadi et al. (2014) applied LCA
in a scheduling context; the scheduling problem considered was a permutation flow-shop
system with the makespan criterion. Abdulhamid et al. (2015a, b, c) used LCA to
minimize the makespan of scheduled tasks in the IaaS cloud, and also proposed a job
scheduling algorithm based on an enhanced LCA in order to optimize the IaaS cloud.
The performance of the proposed algorithm was compared with those of three other
algorithms, and the results showed that the LCA scheduling algorithm performed better
than the other algorithms. Xu et al. (2015)
presented an improved league championship algorithm with free search (LCAFS),
in which a novel match schedule was implemented to improve the teams' capability
of competition, and a free search operation was introduced to improve the diversity
of the league. The global search performance and convergence speed
of LCAFS were compared with those of the basic LCA, PSO, and the genetic algorithm
(GA) in solving some benchmark functions. The results demonstrated that LCAFS
was able to describe complex relationships between key influence factors and
performance indexes. Jalili et al. (2016) introduced a new approach for optimizing
truss design based on LCA, which considered the concept of a tie. The performance
of the proposed algorithm was evaluated by using five typical truss design examples
with different constraints. The results illustrated the effectiveness and robustness of
the proposed algorithm.
The basic idea of LCA was inspired by the concept of a league championship in sport
competitions. The following terms related to the league, the teams, and their structure are
commonly used in LCA.
League: a league is a group of sport teams that are organized to compete
with each other in a certain type of sport. A league championship can be held in
different ways; for example, the number of games that each team plays against the
other teams may vary. At the end of the league championship, the champion is
determined based on the win, loss, and tie records accumulated during the league's
competition with the other teams.
Formation: the formation of a team refers to the specific structure of the team when
playing against other teams, such as the positions of the players and the role of each
player during a match. For any sport team, the coach arranges the team based on the
players' abilities to achieve the best available formation against each opponent.
Match analysis: match analysis refers to the examination of behavioral events
occurring during a match. After the performance of a team has been determined, the
main goal of match analysis is to recognize the team's strengths and weaknesses and to
improve them. An important part of the match analysis process is to feed the results of
the last matches (the team's own match and the opponent's match) back to the players.
The feedback should be given to the players both pre-match and post-match to build up
the team for the next match. One such analysis is the strength/weakness/opportunity/threat
(SWOT) analysis, which links the external (opportunities and threats) and internal
(strengths and weaknesses) factors of the team's performance. Identification of SWOTs is
necessary because the next planning step toward the main objective is based on the
results of the SWOT analysis. The SWOT analysis evaluates the interrelationships
between the internal and external factors of a match in four basic categories:
S/T matches illustrate the strengths of a team and the threats of competitors. The
team should use its strengths to defuse threats.
S/O matches illustrate the strengths and opportunities of a team. The team should
use its strengths to seize opportunities.
W/T matches illustrate the weaknesses and threats of a team. The team should
try to minimize its weaknesses and defuse threats.
W/O matches show the weaknesses coupled with the opportunities of a team. The
team should attempt to overcome its weaknesses by making use of opportunities.
The SWOT analysis provides a structure for conducting a gap analysis. A gap
refers to the space between the place where we are and the place where we want to
be. Identifying a team's gap involves an in-depth analysis of the factors that express
the current condition of the team, which subsequently helps in making a plan for the
improvement of the team.
In the optimization process of LCA, a set of L (an even number) solutions is
first created randomly to build the initial population. The composition of the
population then evolves gradually over sequential iterations. In LCA, the league refers
to the population; the formation of a team stands for a solution; and a week refers to an
iteration. Thus, team i denotes the ith solution of the population. A fitness value is then
calculated for each team based on the team's adaptation to the objectives (determined
by the concepts of player strength and team formation). In LCA, new solutions are
generated for the next week by applying operators to each team based on the results of
the match analysis, which are used by coaches to improve their teams' arrangements.
An evolutionary algorithm (EA) is a population-based algorithm that uses Darwin's
theory of evolution as its selection mechanism. Based on the pseudo code of an EA and
according to the selection process of LCA (greedy selection), in which the current
team formation is replaced by the best team formation, LCA can be classified in
the EA group of population-based algorithms. LCA terminates after a certain
number of seasons (S), each of which is composed of L - 1 weeks. Note that the
number of iterations in LCA is equal to S(L - 1).
LCA models an artificial championship during the optimization process of the
algorithm based on the following idealized rules:
(1) The team with the better playing strength (the ability of the team to defeat
competitors) has a greater chance of winning the game.
(2) The weaker team can win the game, but its chance of winning is very low
(the playing strength does not determine the final outcome of the game exactly).
(3) The sum of the win probabilities of the two teams participating in a match is
equal to one.
(4) The outcome of a game can only be a win or a loss (a tie is not an acceptable
outcome of a game in the basic version of LCA).
(5) When teams i and j compete with each other and team i eventually wins the
match, any strength of team i helps it to win, and the dual weakness causes team j to
lose the match (a weakness is a lack of a specific strength).
(6) The teams focus only on their forthcoming match, without consideration of other
future matches, and the formation of each team is arranged based only on the previous
week's results.
Figure 3.1 shows the flowchart of LCA, which illustrates the optimization
process of the basic LCA. As shown in Fig. 3.1, first of all a representation for
individuals must be chosen. Solutions (team formations) are represented by n
real-valued decision variables. Each element of a solution corresponds to one of the
team's players and holds the corresponding value of the variable to be optimized;
a change in a value reflects a change in the responsibility of the corresponding player.
f(x_1, x_2, \ldots, x_n) denotes an n-variable function to be minimized during the
optimization run of LCA over a decision space (a subset of R^n). The solution of team i
at week t can be represented by X_i^t = (x_{i1}^t, x_{i2}^t, \ldots, x_{in}^t), and the
value of its fitness function (playing strength) is f(X_i^t).
B_i^{t-1} = (b_{i1}^{t-1}, b_{i2}^{t-1}, \ldots, b_{in}^{t-1}) and f(B_i^{t-1}) denote
the best formation of team i before week t and its fitness value, respectively. The greedy
selection in LCA is made between f(X_i^t) and f(B_i^{t-1}). The modules of LCA
(generation of the league schedule, determination of the winner or loser, and setup of the
new formation) are detailed in the following sections.
24 H. Rezaei et al.
The common aspect of different sport leagues is their structure, in which teams
compete with each other according to a nonrandom schedule, called a season. Therefore,
the first and most important step in LCA is to determine the match schedule in each
season. A single round-robin schedule can be applied in LCA to determine the
teams' schedule, in which each team competes against every other team exactly once in a
given season. In a championship containing L teams under the single round-robin
rule, there are L(L - 1)/2 matches in a season. In each of the L - 1
weeks, L/2 matches are held in parallel (if L is odd, (L - 1)/2
matches are held each week and one team rests).
The scheduling procedure of the algorithm can be illustrated by a simple
example of a league championship with 8 teams, named a to h.
In the first week, the competitors are identified randomly. Figure 3.2a shows
the competitors in the first week; for example, team a competes with team d and
team b competes with team g. In the second week, in order to identify the pairs of
competitors, one of the teams (team a) is fixed in its place and all other teams
rotate clockwise. Figure 3.2b indicates the procedure of identifying the pairs of
competitors for week 2. This process continues until the last week (week 7), as shown
in Fig. 3.2c. In LCA, the single round-robin tournament is applied for scheduling
L teams over S(L - 1) weeks.
During the league championship, teams compete with each other every week.
The outcome of each match can be a loss, a tie, or a win. The scoring rules for the
outcome of the matches can differ between sports; in soccer, for instance, the winner
gets three points, the loser gets zero, and each team gets one point if the match ends in
a tie. According to idealized rule 1, the chance of the stronger team winning the match
is higher than that of its competitor, but occasionally a weaker team may win the match.
The outcome of a match therefore depends on several factors, the most important of
which is the playing strengths of the teams, so a linear relationship between the playing
strengths and the expected outcome of the match can be assumed (idealized rule 2):
(f(X_i^t) - \hat{f}) / (f(X_j^t) - \hat{f}) = p_j^t / p_i^t    (3.1)

where X_i^t and X_j^t = the formations of teams i and j at week t; f(X_i^t) and f(X_j^t) = the playing
strengths of teams i and j at week t; \hat{f} = the ideal reference point; p_i^t = the chance of team
i to defeat team j at week t; and p_j^t = the chance of team j to defeat team i at week t.
Because the chances of both teams to win the match are evaluated relative to the
same reference point, the ratio of distances is identified as the teams' winning proportion.
According to idealized rule 3, the chances of teams i and j at
week t are related as follows:

p_i^t + p_j^t = 1    (3.2)
Based on Eqs. (3.1) and (3.2), the chance of team i to defeat team j at week t is
given by (Kashan 2014):

p_i^t = (f(X_j^t) - \hat{f}) / (f(X_i^t) + f(X_j^t) - 2\hat{f})    (3.3)
In LCA, in order to specify the winner of a match, a random number in [0,1] is
generated. If the generated number is less than or equal to p_i^t, team
i defeats team j at week t; otherwise, team j defeats team i at week t.
Before applying any strategy to team i in order to change its formation for the
next week, the coach should identify the strengths and weaknesses of the team and its
players (individuals). Based on these strengths and weaknesses, the coach determines
the formation of the team for the next week to enhance the performance of the team. An
artificial match analysis can be performed to specify the opportunities and threats.
Strengths and weaknesses are internal factors, while opportunities and threats are
external factors. In LCA, the internal factors are evaluated based on the team's
performance in the last week (week t), while the external factors are evaluated based
on the opponent's performance at week t. The artificial match analysis helps prepare
team i for the next week (week t + 1). In the modeling process, if team i wins (loses)
the match at week t, it is assumed that the success (failure) is directly related to the
strengths (weaknesses) of team i or the weaknesses (strengths) of its opponent, team
j (idealized rule 5). The procedure of modeling and evaluating the artificial match
analysis for team i at week t is displayed in Fig. 3.3; the left side of Fig. 3.3 shows
the evaluation of the hypothetical internal factors, and the right side shows the
evaluation of the external factors.
According to the results of the artificial match analysis applied to team i in order
to determine its performance, the coach should take some possible actions to
improve the team's performance. The possible actions (SWOT analysis) are shown
in Table 3.2, which is adjusted based on idealized rule 6. Table 3.2
shows the different strategies (S/T, S/O, W/T, and W/O) that can be adopted for
team i in different situations. For instance, if team i has won its last match and
team l has lost its match in the last week, it is reasonable for team i to focus on the
strengths that give it a better chance of defeating team l in the next match; therefore,
adopting the S/O strategy for team i is efficient. Table 3.2 also displays,
in a metaphorical way, the SWOT analysis matrix that is used for planning
future matches.

Table 3.2 Hypothetical SWOT analysis derived from the artificial match analysis

- Adopt the S/T strategy (team i has won, team l has won): focusing on the team's own
strengths or the weaknesses of team j (S), and on the strengths of team l or the
weaknesses of team k (T).
- Adopt the S/O strategy (team i has won, team l has lost): focusing on the team's own
strengths or the weaknesses of team j (S), and on the weaknesses of team l or the
strengths of team k (O).
- Adopt the W/T strategy (team i has lost, team l has won): focusing on the team's own
weaknesses or the strengths of team j (W), and on the strengths of team l or the
weaknesses of team k (T).
- Adopt the W/O strategy (team i has lost, team l has lost): focusing on the team's own
weaknesses or the strengths of team j (W), and on the weaknesses of team l or the
strengths of team k (O).
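The strategy selection in Table 3.2 reduces to a small lookup on last week's results. A sketch (the function name is mine, not from the source):

```python
def swot_strategy(team_i_won, team_l_won):
    """Map last week's results of team i and of its next opponent l
    to the formation-update strategy of Table 3.2."""
    if team_i_won:
        return "S/T" if team_l_won else "S/O"
    return "W/T" if team_l_won else "W/O"
```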
The aforementioned analysis must be performed by all teams at week t to
plan for the next match and choose a suitable formation for the upcoming match. After
adopting a suitable strategy for team i based on the SWOT matrix, all teams should
fill their gaps. For instance, assume that in a soccer match team i has lost to team j
at week t, and the match analysis process has identified the type of defensive play
(the man-to-man defensive state) as the reason for the loss.
Therefore, a gap exists between the current defensive state and the state
that ensures man-to-man pressure defense at week t + 1.
According to the league schedule, team l is the competitor of team i (i = 1, 2, \ldots, L)
at week t + 1; team j is the competitor of team i at week t; and team k is the competitor
of team l at week t. As mentioned above, X_i^t, X_j^t, and X_k^t, respectively, denote the
formations of teams i, j, and k at week t. X_k^t - X_i^t defines the gap between the playing
styles of teams i and k, which highlights the strengths of team k. This case applies
when team i is to play team l at week t + 1 and team k has defeated team l at week t
with formation X_k^t. Therefore, if team i adopts the week-t playing style of team k (X_k^t) to
compete with team l at week t + 1, team i is very likely to defeat team l at
week t + 1. Similarly, X_i^t - X_k^t is used if the focus is on the weaknesses
of team k; in this case, team i should not use the playing style of team k
against team l. X_i^t - X_j^t and X_j^t - X_i^t can be defined analogously. Based on the
principle that each team should play with the best formation selected from its playing
experience so far, and considering the results of the artificial match analysis of the last
week, the new formation of team i at week t + 1, X_i^{t+1} = (x_{i1}^{t+1}, x_{i2}^{t+1}, \ldots, x_{in}^{t+1}), can be
set up by one of the following equations:
If teams i and l have both won their matches at week t, the new formation of team i will
be generated based on the S/T strategy:

S/T strategy: x_{im}^{t+1} = b_{im}^t + y_{im}^t (\psi_1 r_{1im} (x_{im}^t - x_{km}^t) + \psi_1 r_{2im} (x_{im}^t - x_{jm}^t)),  m = 1, 2, \ldots, n    (3.4)

If team i has won and team l has lost at week t, the new formation of team i will
be generated based on the S/O strategy:

S/O strategy: x_{im}^{t+1} = b_{im}^t + y_{im}^t (\psi_2 r_{1im} (x_{km}^t - x_{im}^t) + \psi_1 r_{2im} (x_{im}^t - x_{jm}^t)),  m = 1, 2, \ldots, n    (3.5)

If team i has lost and team l has won at week t, the new formation of team i will
be generated based on the W/T strategy:

W/T strategy: x_{im}^{t+1} = b_{im}^t + y_{im}^t (\psi_1 r_{1im} (x_{im}^t - x_{km}^t) + \psi_2 r_{2im} (x_{jm}^t - x_{im}^t)),  m = 1, 2, \ldots, n    (3.6)

If teams i and l have both lost their matches at week t, the new formation of team i will be
generated based on the W/O strategy:

W/O strategy: x_{im}^{t+1} = b_{im}^t + y_{im}^t (\psi_2 r_{1im} (x_{km}^t - x_{im}^t) + \psi_2 r_{2im} (x_{jm}^t - x_{im}^t)),  m = 1, 2, \ldots, n    (3.7)
where m = the index over the n team members (decision variables); r_{1im} and r_{2im} = random
numbers in [0,1]; \psi_1 and \psi_2 = coefficients used to scale the contribution of the retreat
and approach components; and y_{im}^t = a binary variable that specifies whether or not the
mth player changes in the new formation (a change is allowed only when y_{im}^t = 1). Note that
the different signs in the parentheses are the consequence of acceleration towards the
winner or retreat from the loser. Y_i^t = (y_{i1}^t, y_{i2}^t, \ldots, y_{in}^t) denotes the binary change
vector. The number of changes needed for the next match (the number of elements with
y_{im}^t = 1) is equal to q_i^t. Coaches do not commonly change all aspects of the team (players
and styles); often only a few changes are required. In order to determine the
number of changes in the team's formation for the next match, a truncated geometric
distribution is applied in LCA. The truncated geometric distribution lets LCA
control the number of changes, with emphasis on smaller numbers of changes in B_i^t.
The truncated geometric distribution can be expressed as follows:
q_i^t = \lceil \ln(1 - (1 - (1 - p_c)^{n - q_0 + 1}) r) / \ln(1 - p_c) \rceil + q_0 - 1,  q_i^t \in \{q_0, q_0 + 1, \ldots, q_0 + n\}    (3.8)
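Sampling q_i^t from this truncated geometric distribution can be sketched as below. This is a hedged reconstruction of Eq. (3.8): the final clamp to the range [q_0, n] and the default parameter values are my assumptions, not from the source:

```python
import math
import random

def sample_num_changes(n, q0=1, p_c=0.7, rng=random):
    """Draw the number of formation changes q_i^t; small counts near q0
    are the most likely, and the result is capped at the n players."""
    r = rng.random()
    q = math.ceil(math.log(1.0 - (1.0 - (1.0 - p_c) ** (n - q0 + 1)) * r)
                  / math.log(1.0 - p_c)) + q0 - 1
    return min(max(q, q0), n)
```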
For t = 1 to S(L - 1)
    Calculate the chance of each team to defeat its competitor in the next match (p_i^t)
    Generate a random number Rn in [0,1]
    If Rn <= p_i^t
        Team i wins the match
    Else
        Team j wins the match
    End if
    Calculate the number of changes in each team's best formation (B_i^t) for the next match
    q_i^t players are selected randomly from B_i^t and changed according to the SWOT matrix
End for t
End
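Putting Eqs. (3.4)-(3.7) together, one team's formation update can be sketched as follows. This is an illustration only: the function name, the argument layout, and the default \psi_1 = \psi_2 = 1 are my assumptions:

```python
import random

def update_formation(B_i, X_i, X_j, X_k, strategy, y,
                     psi1=1.0, psi2=1.0, rng=random):
    """New formation of team i for week t+1 (Eqs. 3.4-3.7).
    B_i: best formation so far; X_i/X_j/X_k: week-t formations of team i,
    its last opponent j, and its next opponent's last rival k;
    y[m] = 1 marks the players that are allowed to change."""
    out = []
    for m in range(len(B_i)):
        r1, r2 = rng.random(), rng.random()
        if strategy == "S/T":
            step = psi1 * r1 * (X_i[m] - X_k[m]) + psi1 * r2 * (X_i[m] - X_j[m])
        elif strategy == "S/O":
            step = psi2 * r1 * (X_k[m] - X_i[m]) + psi1 * r2 * (X_i[m] - X_j[m])
        elif strategy == "W/T":
            step = psi1 * r1 * (X_i[m] - X_k[m]) + psi2 * r2 * (X_j[m] - X_i[m])
        else:  # "W/O"
            step = psi2 * r1 * (X_k[m] - X_i[m]) + psi2 * r2 * (X_j[m] - X_i[m])
        out.append(B_i[m] + y[m] * step)
    return out
```

With all y[m] = 0 the best formation is kept unchanged, which mirrors the role of the binary change vector Y_i^t.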
3.8 Conclusions
This chapter described the league championship algorithm (LCA), which stems
from the concept of a league championship in sport. The chapter also presented a
literature review of LCA, its algorithmic fundamentals, and its pseudo code.
References
Abdulhamid, S. M., & Latiff, S. A. (2014). League championship algorithm (LCA) based job
scheduling scheme for infrastructure as a service cloud. 5th International Graduate Conference
on Engineering, Science and Humanities, UTM Postgraduate Student Societies, Johor,
Malaysia, 19-21 August.
Abdulhamid, S. M., Latiff, M. S., & Abdullahi, M. (2015a). Job scheduling technique for
infrastructure as a service cloud using an enhanced championship algorithm. 2nd International
Conference on Advanced Data and Information Engineering, Lecture Notes in Electrical
Engineering, Bali, Indonesia, 25-26 April.
Abdulhamid, S. M., Latiff, M. S., & Idris, I. (2015b). Tasks scheduling technique using league
championship algorithm for makespan minimization in IaaS cloud. ARPN Journal of
Engineering and Applied Sciences, 9(12), 2528-2533.
Abdulhamid, S. M., Latiff, M. S. A., Madni, S. H. H., & Oluwafemi, O. (2015c). A survey of
league championship algorithm: Prospects and challenges. Indian Journal of Science and
Technology.
Jalili, S., Kashan, A. H., & Husseinzadeh, Y. (2016). League championship algorithms for
optimization design of pin-jointed structures. Journal of Computing in Civil Engineering.
doi:10.1061/(ASCE)CP.1943-5487.0000617
Kashan, A. H. (2009). League championship algorithm: A new algorithm for numerical function
optimization. International Conference on Soft Computing and Pattern Recognition, IEEE
Computer Society, Malacca, Malaysia, 4-7 December.
Kashan, A. H. (2011). An efficient algorithm for constrained global optimization and application to
mechanical engineering design: League championship algorithm (LCA). Computer-Aided
Design, 43(12), 1769-1792.
Kashan, A. H. (2014). League championship algorithm (LCA): An algorithm for global
optimization inspired by sport championships. Applied Soft Computing, 16, 171-200.
Kashan, A. H., Karimiyan, S., Karimiyan, M., & Kashan, M. H. (2012). A modified League
Championship Algorithm for numerical function optimization via artificial modeling of the
between two halves analysis. The 6th International Conference on Soft Computing and
Intelligent Systems, and the 13th International Symposium on Advanced Intelligence Systems,
University of Aizu, Kobe, Japan, 20-24 November.
Lenin, K., Reddy, B. R., & Kalavati, M. S. (2013). League championship algorithm (LCA) for
solving optimal reactive power dispatch problem. International Journal of Computer and
Information Technologies, 1(3), 254-272.
Sajadi, S. M., Kashan, A. H., & Khaledan, S. (2014). A new approach for permutation flow-shop
scheduling problem using league championship algorithm. In Proceedings of CIE44 and
IMSS'14, 2014.
Xu, W., Wang, R., & Yang, J. (2015). An improved league championship algorithm with free
search and its application on production scheduling. Journal of Intelligent Manufacturing.
doi:10.1007/s10845-015-1099-4
Chapter 4
Anarchic Society Optimization
(ASO) Algorithm
4.1 Introduction
"Anarchic" is derived from the Greek word "anarkos", meaning "no boss", and "anarchia"
means lack of government. The term Anarchism refers to a political opinion and
movement holding that any political power and authority are objectionable and
unnecessary and that any government should be overthrown and replaced with free
associations and volunteer groups, because Anarchism believes that government
causes a nation's social miseries. Overall, Anarchists are opposed to any
government authority and consider democracy to be the tyranny of the majority.
They emphasize individual freedom. This emphasis results in opposition to any
external authority, especially government, which is construed as a barrier to free
individual growth and excellence.
Anarchist thought is based on a variety of principles, including individualism,
humanism, libertarianism, lawlessness, anarchy, and absolute freedom.
According to these principles, Anarchism opposes any religious or non-religious
social institutions and considers the human being an absolutely free creature. At the heart
of Anarchism there is a reckless utopianism, believing in natural wellness, or
at least mankind's potential wellness (Nettlau 2000).
Anarchism contains a variety of branches, and Anarchist theorists follow one of
them. In the view of Communist Anarchism, the human is inherently
social, and a society and its individuals benefit each other; conformity between the
human and society becomes possible by negating the powerful social institutions,
especially government. Syndicate-oriented Anarchism looks for salvation in the
economic strife, not the political strife, of the proletariat. Followers of this faction
organize labor unions and syndicates to quarrel with the power structure. In their
view, the current government will eventually be annihilated as a result of a
revolution, and a new economic order will be formed based on syndicates.
Nowadays, such thoughts have become a mass movement in some South American
and European countries. The followers of Individualist Anarchism believe that the
human has the right to do whatever he/she wills, and whatever deprives him/her of
freedom must be destroyed (Woodcock 2004).
In general, Anarchism suffers from three major insufficiencies.
First, Anarchism has the unrealistic goal of collapsing government
and all other forms of political authority, while economic and social development
has always been accompanied by government roles. Second, Anarchism
objects to powerful institutions, such as parties, that can play an effective and efficient
role in the development of a society. Third, Anarchism lacks a series of distinct and
coherent political beliefs, which causes many disputes.
During recent centuries, many countries have undergone certain types of
anarchy, including France (the revolution period), Jamaica (1720), Russia (during the
civil wars), Spain (1936), Albania (1997), and Somalia (1991-2006). In the view of
Anarchists, a society can be managed without the need for a central
government, based only on individuals or volunteer groups. In this case, individuals
or groups are able to determine the right direction without being ordered
by a ruling power, based only on their own or others' previous experiences.
Although this viewpoint has not been successful in the stable management of a society
so far, it can be used as a basis for developing optimization methods in the engineering
sciences.
In this method, each individual selects his/her next position according to
personal experiences, group or syndicate experiences, and historical experiences.
Finally, after a number of moves, at least one of the group members reaches a
near-optimal solution. Employing this algorithm causes the total decision space to be
searched thoroughly and prevents getting stuck at local optima.
4 Anarchic Society Optimization (ASO) Algorithm 33
4.2 Formulation
The detailed procedures of the ASO algorithm are shown in Fig. 4.1.
For a solution space S, f : S -> R is a function that should be minimized over
S. Consider a community, consisting of N members, that is searching an
unknown territory (the solution space) to discover the best place to live (i.e., the
global minimum of f on S). X_i(k) denotes the position of the ith member in the kth
iteration; X*(k) denotes the best position among all members in the kth
iteration; P_i^best(k) is the best position experienced by the ith member during the
first k iterations; and G^best is the best position experienced by all members during the
first k iterations.
As shown in Fig. 4.1, first, a number of community members are selected randomly
within the solution space. Then, the fitness of every member is determined.
According to the calculated fitness value and its comparison with X*(k), P_i^best(k), and G^best,
the movement policy and the new position of the member are determined. After
an adequate number of iterations, at least one of the members will reach the optimal
position. Table 4.1 lists the characteristics of the ASO algorithm.
The first movement policy for the ith member in the kth iteration, MP_i^current(k), is
adopted based on the current position. The fickleness index FI_i(k) for member i in
iteration k is used to select this movement policy (Ahmadi-Javid 2011). This index
reflects how satisfactory the current position of the ith member is compared with the
other members' positions. If the objective function is positive on S, the fickleness
index can be expressed in one of the following forms (Ahmadi-Javid 2011):

FI_i(k) = 1 - \alpha_i f(X*(k)) / f(X_i(k)) - (1 - \alpha_i) f(P_i^best(k)) / f(X_i(k))    (4.1)

FI_i(k) = 1 - \alpha_i f(G^best) / f(X_i(k)) - (1 - \alpha_i) f(P_i^best(k)) / f(X_i(k))    (4.2)

where \alpha_i is a weighting coefficient of member i in [0,1].
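A one-line sketch of Eq. (4.1); the function name is mine, and the objective values are assumed positive, as the text requires:

```python
def fickleness_index(f_xstar, f_pbest, f_xi, alpha):
    """Fickleness index FI_i(k) of Eq. (4.1): alpha weights the best
    objective value of the current iteration against the member's own best,
    both relative to the member's current objective value f_xi."""
    return 1.0 - alpha * f_xstar / f_xi - (1.0 - alpha) * f_pbest / f_xi
```

A member sitting simultaneously at the iteration best and at its personal best gets FI_i(k) = 0, i.e. full satisfaction with its current position.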
The second movement policy for the ith member in the kth iteration, MP_i^society(k), is
adopted based on the positions of the other members. Although each member should
logically move in the direction of G^best, the movement of the member is not
predictable due to its anarchist nature, and the member may move toward another
community member instead. The external irregularity index EI_i(k) for the ith
member in the kth iteration is used to decide (Ahmadi-Javid 2011):

MP_i^society(k) = moving towards G^best,                          if 0 <= EI_i(k) <= threshold
                  moving towards a randomly selected X_j(k),      if threshold < EI_i(k) <= 1    (4.6)

The closer the threshold is to zero, the more illogical the member movements
would be. As the threshold converges to one, the members behave more
logically.
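The threshold rule of Eq. (4.6) can be sketched as below; the function name is mine, and `members` is assumed to hold the current positions of all community members:

```python
import random

def society_move_target(members, g_best, ei, threshold, rng=random):
    """Second movement policy: head for G_best while EI_i(k) stays within
    the threshold, otherwise for a randomly chosen member's position."""
    if 0.0 <= ei <= threshold:
        return g_best
    return members[rng.randrange(len(members))]
```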
The third movement policy for the ith member in the kth iteration, MP_i^past(k), is
adopted based on the previous positions of the individual member. In order to select
this movement policy, the position of the ith member in the kth iteration is
compared with P_i^best(k). If the position of the member is close to P_i^best(k), the member
behaves more logically; otherwise, the member shows illogical behavior. To determine
the movement policy based on previous positions, the internal irregularity index
II_i(k) for the ith member in the kth iteration is defined. Again, the closer the
threshold is to zero, the more illogical the member movements would be, and as the
threshold converges to one, the members behave more logically.
In order to select the final movement policy, the three policies discussed above are combined with each other. After the movement policies are calculated, each member combines them by some rule and moves toward a new position. One of the simplest methods is to select the policy that yields the best objective value. Another alternative is to combine the movement policies sequentially, which is referred to as the sequential combination rule. The crossover method can either be used for continuous problems coded as chromosomes, or applied sequentially to combine the movement policies. Ahmadi-Javid (2011) demonstrated that the ASO algorithm is a generalization of the PSO algorithm.
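As an illustrative sketch of the three-way policy selection described above (the function names, the best-of-three combination rule, and the treatment of the indices are simplifying assumptions for a minimization setting, not Ahmadi-Javid's exact formulation):

```python
import random

def fickleness_index(f_best, f_pbest, f_current, alpha):
    # Eq. (4.2)-style index for a positive objective function: how
    # dissatisfied member i is with its current position, relative to the
    # global best value f_best and its personal best value f_pbest.
    return 1.0 - alpha * (f_best / f_current) - (1.0 - alpha) * (f_pbest / f_current)

def society_policy(ei, threshold, gbest, members):
    # Eq. (4.6): move toward Gbest while the external irregularity index
    # stays below the threshold; otherwise follow a randomly chosen member.
    return gbest if ei <= threshold else random.choice(members)

def combine_policies(candidate_positions, objective):
    # Simplest combination rule: keep the candidate with the best objective.
    return min(candidate_positions, key=objective)
```

For example, `combine_policies([x_current, x_society, x_past], f)` implements the "policy with the best answer" rule mentioned above.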
4 Anarchic Society Optimization (ASO) Algorithm 37
4.9 Conclusion
Evolutionary and heuristic algorithms are widely used for solving optimization problems in the engineering and science fields. In the last few decades, various algorithms have been introduced, mostly based on insects, animal life, and biological concepts. In this chapter, the Anarchic Society Optimization algorithm, first introduced by Ahmadi-Javid (2011), was described.
This algorithm has been investigated for solving electrical and industrial engineering problems (e.g., power) and for optimizing water networks and reservoir operation. The ASO algorithm has some advantages, such as its relatively simple structure and its potential to achieve good performance. Three indices are used in the definition of the ASO algorithm. It seems that changing how these indices are used to reach a new position for each member, or even defining a new index, could lead to superior convergence.
The ASO algorithm was adapted from the life of human communities. Since human societies are more complicated than animal or insect groupings, it is expected that ASO, as the first algorithm based on human societies, marks a turning point in the performance and capabilities of optimization algorithms.
References
Chapter 5
Cuckoo Optimization Algorithm (COA)
5.1 Introduction
All of the roughly 9000 existing bird species in the world reproduce in the same way: by laying eggs. No bird gives birth to live young; birds lay eggs and raise their chicks outside their bodies. The larger the eggs are, the lower the probability that a female bird carries more than one egg in her body simultaneously, because, on one hand, bigger eggs make flying difficult and require more energy. On the other hand, eggs are a rich source of protein for predators, so it is necessary for birds to find a secure place for laying and hatching their eggs. Finding a secure place for laying eggs, hatching them, and raising the chicks until they are independent of their parents is of vital importance, and birds solve this problem resourcefully, with artistry and complicated engineering. The variety of nest-making and the architecture of the nests are stunning. Most birds make their nests segregated and hidden to prevent detection by predators. Some of them hide their nests so skillfully that human beings are not able to recognize or see them.
There are some birds that have freed themselves from the challenge of nest-making and use a cunning way to raise their families. These birds, called brood parasites, never build a nest; they lay their eggs in other species' nests and wait for the hosts to take care of their young. The cuckoo is the most famous brood parasite and an expert in deception. The strategy of cuckoos relies on speed, stealth, and surprise. A mother cuckoo destroys the host's eggs, lays her own eggs among the others in the nest, flies away from the location quickly, and leaves the care of her eggs to the host bird. This process hardly takes more than 10 s. Cuckoos
42 S. Jafari et al.
parasitize other nests with their eggs and carefully mimic the color and patterns of the existing eggs, so that the new eggs in the nest look like the previous ones. Each female cuckoo specializes in a specific species of bird; how female cuckoos imitate a particular kind of host egg so accurately is one of the main secrets of nature. Some birds recognize cuckoo eggs, and sometimes they even throw the eggs out of the nest; some completely abandon the nest and build a new one. In fact, cuckoos continuously improve their mimicry of the eggs in the target nests, and host birds learn new ways to recognize the strange eggs as well. This struggle for survival between different birds and cuckoos is a constant and continuous process.
A suitable habitat for cuckoos should provide food sources (especially insects) and locations for laying eggs, so the main necessity of brood parasites is the habitat of the host species. Cuckoos are found in a variety of places. Most of the species are found in forests, especially evergreen rain forests. Some cuckoo species select a wider range of places to live, which can even include dry areas and deserts. Migratory species select vast environments to make maximum use of the host birds. Most cuckoo species are non-migratory, but several have seasonal migrations as well, and some species make partial migrations within their habitat range. Some species (e.g., channel-billed cuckoos) migrate by day, while others (e.g., yellow-billed cuckoos) migrate by night. For cuckoos that live in mountainous areas, food availability necessitates migration to tropical areas. Long-tailed koel cuckoos, which live and lay eggs in New Zealand, migrate to Micronesia, Melanesia, and Polynesia in winter. Yellow-billed and black-billed species that breed in North America cross the Caribbean Sea in a non-stop 4000-km flight. Other long-distance migrations include lesser cuckoos that fly over the Indian Ocean from India to Kenya (about 3000 km). Ten species of cuckoos perform polarized intra-continental migration, spending the non-breeding seasons in tropical areas of the continent and then migrating to dry and desert areas for egg laying.
About 52 Old World species and 3 New World species are brood parasites that lay their eggs in other birds' nests. These species are obligate brood parasites, since this is their only way of reproduction. Cuckoo eggs hatch earlier than their hosts' eggs, and in most cases the cuckoo chick throws the host's eggs or chicks out of the nest. This behavior is completely instinctive; the cuckoo chick has no time to learn it. A cuckoo chick makes the host provide food suitable for its growth by begging for food again and again: it announces its need with an open mouth, because to the mother an open mouth is an indication of hunger.
Thanks to natural selection, female cuckoos are skillful and expert at producing eggs similar to their hosts' eggs, although some birds still recognize the eggs and throw them out. Parasitic cuckoos are divided into different categories, each specializing in a particular host, and it has been shown that the cuckoos in one category are genetically different from those in another. Specialization in imitating the hosts' eggs gradually improves and evolves.
5 Cuckoo Optimization Algorithm (COA) 43
Figure 5.1 shows the flowchart of COA. Like other evolutionary algorithms, COA starts with an initial population (a population of cuckoos). These cuckoos have some eggs that will be laid in other species' nests. The eggs that look like the host's eggs are more likely to be raised and turn into cuckoos; the other eggs are detected by the host and destroyed. The rate of raised eggs indicates the suitability of an area: the more eggs survive in an area, the more profit is attributed to that area. Thus, the situation in which more eggs survive is the quantity that the cuckoos optimize.
Cuckoos search for the best area in order to maximize the survival of their eggs. After hatching and turning into mature cuckoos, they form societies and communities, each with its own habitat. The best habitat among all communities becomes the next destination for cuckoos in the other groups: all groups migrate toward the best currently existing area, and each group settles in an area near it. An egg-laying radius (ELR) is calculated for each cuckoo based on the number of eggs it lays and its distance from the current optimal area.
[Fig. 5.1 Flowchart of COA: check whether the population exceeds its maximum value, destroy cuckoos in unsuitable areas, find the nests with the best rate of survival, and stop when the convergence condition is met.]
Afterwards, cuckoos start laying eggs randomly in nests within their egg-laying radii. This process continues until the best place for egg laying (the zone with the most profit) is reached. This optimal zone is the place where the maximum number of cuckoos gathers. Table 5.1 lists the characteristics of the COA.
The amount of profit, or suitability rate, of the current habitat is obtained by evaluating the profit function. COA is thus an algorithm that maximizes the profit function; to apply it to a minimization problem, the cost function is multiplied by a minus sign.
To start the optimization, a habitat matrix of size N_pop × N_var is generated. Afterwards, a random number of eggs is assigned to each habitat. In nature, each cuckoo lays between 5 and 20 eggs; these numbers are used as the lower and upper limits on the number of eggs assigned to each cuckoo in different iterations. Each real cuckoo also lays its eggs within a specific range; thus, the maximum range of egg laying is
the ELR. In an optimization problem with upper and lower variable limits var_hi and var_low, each cuckoo has an ELR that is proportional to the total number of eggs, its own number of eggs, and the upper/lower limits of the problem variables. The ELR is defined as (Rajabioun 2011):

ELR = α × (number of current cuckoo's eggs / total number of eggs) × (var_hi − var_low)

where α = an integer that controls the maximum value of the ELR.
Each cuckoo randomly lays eggs in its host birds' nests within its ELR. Figure 5.2 shows the egg-laying radius, or the maximum range of egg laying.
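As a numerical sketch of this definition (the scaling constant `alpha` is treated as a user-set parameter, an assumption for illustration):

```python
def egg_laying_radius(alpha, own_eggs, total_eggs, var_hi, var_low):
    # The radius grows with this cuckoo's share of all eggs and with the
    # width of the decision-variable range (after Rajabioun 2011).
    return alpha * (own_eggs / total_eggs) * (var_hi - var_low)
```

A cuckoo holding 10 of 100 eggs, on a variable bounded by [−5, 5] with alpha = 1, gets `egg_laying_radius(1.0, 10, 100, 5.0, -5.0)`, i.e., a radius of 1.0.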
After all of the cuckoos lay their eggs, some of the eggs that are less similar to the hosts' eggs are recognized and thrown out. Thus, after each egg-laying process, the p% of all eggs (usually 10%) with the lowest profit-function values are destroyed. The rest of the chicks in the hosts' nests are fed and raised.
Another interesting point about cuckoo chicks is that only one egg has the opportunity to be raised in each nest, because when a cuckoo chick hatches, it throws the host's eggs out. Even if the host's chicks hatch first, the cuckoo chick eats the largest share of the food (its body is three times larger and it knocks the other chicks aside), so after several days the host's own chicks die from hunger and only the cuckoo chick survives.
When cuckoo chicks grow up and become mature, they live in the surrounding environment and in their communities for a while. But when the egg-laying time approaches, they migrate to better habitats in which the chances for the survival of their eggs are higher. After cuckoo groups are formed in various environments (the search space of the problem), the group with the best position is selected as the target group toward which the other cuckoos migrate.
It is difficult to recognize to which group each cuckoo belongs when mature cuckoos live in several environmental zones. To solve this problem, cuckoos are classified by K-means clustering (a value of K between 3 and 5 suffices).
After all groups are formed, the average profit of each group is calculated to obtain the relative optimality of each group's living area. The group with the highest average profit is then selected as the target group, and all other groups migrate toward it. While migrating toward the target point, cuckoos do not fly the whole way to the target; they pass only a portion of the distance, and they may even deviate from the target. This movement is shown in Fig. 5.3.
As shown in Fig. 5.3, each cuckoo flies only a fraction λ of the entire distance toward the current ideal target and also has a deviation of φ. These two parameters help the cuckoos search more of the space. λ is a random number between 0 and 1, and φ is a number between −π/6 and π/6. When all of the cuckoos have migrated toward the target point and their habitat points are determined, each cuckoo is assigned a number of eggs, an ELR is determined for it, and egg laying begins.
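The partial flight with deviation can be sketched in two dimensions as follows (the rotate-then-scale interpretation of λ and φ is an assumption made for illustration):

```python
import math
import random

def migrate(pos, goal):
    # Fly only a random fraction lam of the way toward the goal, with an
    # angular deviation phi drawn from [-pi/6, pi/6] (2-D illustration).
    lam = random.random()
    phi = random.uniform(-math.pi / 6, math.pi / 6)
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    # rotate the step vector by phi, then scale it by lam
    step_x = lam * (dx * math.cos(phi) - dy * math.sin(phi))
    step_y = lam * (dx * math.sin(phi) + dy * math.cos(phi))
    return (pos[0] + step_x, pos[1] + step_y)
```

Because λ < 1 and |φ| ≤ π/6, the cuckoo lands somewhere between its old position and the target, slightly off the straight line, which is what provides the extra search diversity described above.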
Because there is always a balance among bird populations in nature, a number N_max is used to control the maximum number of cuckoos that can live in a place. This balance is due to competition for limited food, predation, and the inability to find proper nests for the eggs.
After several iterations, all of the cuckoos reach an optimal point with the maximum similarity of their eggs to the hosts' eggs and with the maximum food sources. This location has the highest total profit and the lowest chance of the eggs being destroyed. Convergence of more than 95% of all cuckoos toward one point finalizes the optimization process. The main steps of COA are shown in the following pseudo code:
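The steps above can be sketched as a minimal runnable loop (the parameter values, the simple survivor-selection rule, and the omission of the K-means clustering step are simplifying assumptions; minimizing a cost equals maximizing profit):

```python
import random

def coa_minimize(cost, dim, n_pop=5, n_max=20, iters=30, lo=-5.0, hi=5.0):
    """Toy COA loop: habitats lay eggs within their ELR, the worst 10% of
    eggs are destroyed, and the survivors (capped at n_max) migrate part of
    the way toward the best habitat."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pop)]
    best = min(pop, key=cost)
    for _ in range(iters):
        eggs = []
        for cuckoo in pop:
            n_eggs = random.randint(5, 20)          # 5-20 eggs, as in nature
            elr = 0.1 * (n_eggs / (20 * len(pop))) * (hi - lo)
            for _ in range(n_eggs):
                eggs.append([min(hi, max(lo, x + random.uniform(-elr, elr)))
                             for x in cuckoo])
        eggs.sort(key=cost)
        pop = eggs[:max(1, int(0.9 * len(eggs)))][:n_max]  # kill worst 10%, cap
        goal = pop[0]                               # best current habitat
        # fly only a random fraction of the way toward the best habitat
        pop = [[x + random.random() * (g - x) for x, g in zip(c, goal)]
               for c in pop]
        best = min([best] + pop, key=cost)
    return best
```

Because `best` is only ever replaced by a better habitat, the returned solution can never be worse than the initial population's best.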
5.10 Conclusion
The cuckoo optimization algorithm, inspired by the lifestyle of cuckoos, was explained. The cuckoo's specific and unique behavior in egg laying and breeding is the basis of this algorithm. In COA, each cuckoo has a habitat in which it lays eggs. If the eggs survive, they are raised and become mature; afterwards, the cuckoos migrate to the best habitat found for reproduction. The variety associated with the cuckoos' movement toward the target habitat provides more of the space for search. This algorithm is considered a successful imitation of nature and is suitable for optimization problems in different fields.
References
Balochian, S., & Ebrahimi, E. (2013). Parameter optimization via cuckoo optimization algorithm of fuzzy controller for liquid level control. Journal of Engineering, 2013.
Kahramanli, H. (2012). A modified cuckoo optimization algorithm for engineering optimization. International Journal of Future Computer and Communication, 1(2), 199.
Khajeh, M., & Golzary, A. R. (2014). Synthesis of zinc oxide nanoparticles–chitosan for extraction of methyl orange from water samples: Cuckoo optimization algorithm–artificial neural network. Spectrochimica Acta Part A, 131, 189–194.
Khajeh, M., & Jahanbin, E. (2014). Application of cuckoo optimization algorithm–artificial neural network method of zinc oxide nanoparticles–chitosan for extraction of uranium from water samples. Chemometrics and Intelligent Laboratory Systems, 135, 70–75.
Mellal, M. A., & Williams, E. J. (2015a). Cuckoo optimization algorithm for unit production cost in multi-pass turning operations. The International Journal of Advanced Manufacturing Technology, 76(1–4), 647–656.
Mellal, M. A., & Williams, E. J. (2015b). Cuckoo optimization algorithm with penalty function for combined heat and power economic dispatch problem. Energy, 93, 1711–1718.
Mellal, M. A., & Williams, E. J. (2016). Parameter optimization of advanced machining processes using cuckoo optimization algorithm and hoopoe heuristic. Journal of Intelligent Manufacturing, 27(5), 927–942.
Moezi, S. A., Zakeri, E., Zare, A., & Nedaei, M. (2015). On the application of modified cuckoo optimization algorithm to the crack detection problem of cantilever Euler-Bernoulli beam. Computers & Structures, 157, 42–50.
Rabiee, M., & Sajedi, H. (2013). Job scheduling in grid computing with cuckoo optimization algorithm. International Journal of Computer Applications, 62(16).
Rajabioun, R. (2011). Cuckoo optimization algorithm. Applied Soft Computing, 11(8), 5508–5518.
Singh, U., & Rattan, M. (2014). Design of linear and circular antenna arrays using cuckoo optimization algorithm. Progress in Electromagnetics Research C, 46, 1–11.
Shadkam, E., & Bijari, M. (2014). Evaluation the efficiency of cuckoo optimization algorithm. International Journal on Computational Sciences and Applications (IJCSA), 4, 39–47.
Shokri-Ghaleh, H., & Alfi, A. (2014). Optimal synchronization of teleoperation systems via cuckoo optimization algorithm. Nonlinear Dynamics, 78(4), 2359–2376.
Chapter 6
Teaching-Learning-Based Optimization
(TLBO) Algorithm
6.1 Introduction
electrical, and environmental engineering. For example, Rao and Kalyankar (2012) used the TLBO algorithm for mechanical design optimization problems. Toğan (2012) optimized the design of planar steel frames by using the TLBO algorithm. Baghlani and Makiabadi (2013) used the TLBO algorithm to optimize the design of truss structures. García and Mena (2013) presented an optimum design of distributed generation by using a modified version of the TLBO algorithm. Roy (2013) and Roy et al. (2013) obtained an optimum solution to a hydrothermal scheduling problem for hydropower plants by the TLBO algorithm. Sultana and Roy (2014) applied the TLBO algorithm to minimize power loss and energy cost in power distribution systems. Bouchekara et al. (2014) applied the TLBO algorithm to solve the power flow problem. Ji et al. (2014) applied a modified TLBO algorithm to improve the forecast accuracy of water supply system operation. Bayram et al. (2015) used the TLBO algorithm to predict the concentrations of dissolved oxygen in surface water. Thus, the TLBO algorithm has a variety of applications, because it is easy to use and convenient to adapt to different problems.
Imagine a classroom with two major groups: a teacher who teaches the class and students who are learning. The responsibility of the teacher is to improve the knowledge level of the whole class, leading to better performance of the students in exams. In the teacher phase, the teacher tries to transfer knowledge to the learners; thus, the teacher has the highest knowledge level in the classroom and endeavors to raise the level of the class. Suppose that there are n students (j = 1, 2, …, n) in a classroom, whose average grade in iteration i is M_i, and that the best learner, who achieves the best grade X_T,i, serves as the teacher. The difference between the classroom average grade (M_i) and the best grade (X_T,i) can be computed by

Diff_i = r_i × (X_T,i − T_F × M_i)    (6.1)

where Diff_i = the difference between the average grade and the best grade; r_i = a random number in [0, 1] in iteration i; X_T,i = the grade of the best learner (teacher) in iteration i; T_F = the teacher factor, which depends on the teaching quality and is either 1 or 2; and M_i = the average of the learners' grades in iteration i.
Then, by using Diff_i, the new grade of student j in iteration i can be expressed as

X′_j,i = X_j,i + Diff_i    (6.3)

where X′_j,i = the new grade of student j in iteration i and X_j,i = the old grade of student j in iteration i. If X′_j,i is better than X_j,i, X′_j,i goes through to the learner phase; otherwise, X_j,i goes through to the learner phase.
[Fig. 6.1 Flowchart of the TLBO algorithm: teacher phase, computation of X′_j, learner phase, stop check.]
X″_A,i = X′_A,i + r_i (X′_A,i − X′_B,i),   if X′_A,i is better than X′_B,i
X″_A,i = X′_A,i + r_i (X′_B,i − X′_A,i),   if X′_B,i is better than X′_A,i    (6.4)

If X″_A,i is better than X′_A,i, X″_A,i goes through to the next iteration; otherwise, X′_A,i goes through to the next iteration.
The steps of the working method of TLBO are summarized as follows:
(1) Initialize the grades (X_j,i) of n students (the population size) randomly for iteration i.
(2) Evaluate the objective function for the n students.
(3) Select the best objective function value (the teacher) and compute Diff_i by using Eq. (6.1).
(4) Compute X′_j,i for the n students in iteration i by using Eq. (6.3).
(5) Compare X′_j,i with X_j,i. The better one goes to the next step and the other one is removed.
(6) Select pairs of students randomly and compare them with each other. Then X″_j,i is computed for every student by using Eq. (6.4).
(7) Compare X″_j,i with X′_j,i. The better one goes to the next step and the other one is removed.
(8) Evaluate the objective function for the n students and check whether the stop criteria are satisfied. If yes, the best solution has been achieved; otherwise, go back to Step (3).
The flowchart of the TLBO algorithm is shown in Fig. 6.1 to illustrate the working process of TLBO, and the algorithm's characteristics are shown in Table 6.1.
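The eight steps above can be sketched for a minimization problem as follows (greedy acceptance and clamping candidates to the variable bounds are implementation assumptions, not part of the original description):

```python
import random

def tlbo_minimize(f, lo, hi, dim, n=10, iters=50):
    """Toy TLBO: teacher phase (Eqs. 6.1 and 6.3), then learner phase (Eq. 6.4)."""
    clamp = lambda v: min(hi, max(lo, v))
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        # --- teacher phase: pull everyone toward the best learner ---
        teacher = min(X, key=f)
        mean = [sum(x[d] for x in X) / n for d in range(dim)]
        tf = random.randint(1, 2)                    # teacher factor T_F
        for j in range(n):
            r = random.random()                      # r_i in [0, 1]
            cand = [clamp(X[j][d] + r * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(X[j]):                    # keep only improvements
                X[j] = cand
        # --- learner phase: compare random pairs of students ---
        for j in range(n):
            k = random.choice([i for i in range(n) if i != j])
            r = random.random()
            if f(X[j]) < f(X[k]):                    # j is the better learner
                cand = [clamp(X[j][d] + r * (X[j][d] - X[k][d]))
                        for d in range(dim)]
            else:
                cand = [clamp(X[j][d] + r * (X[k][d] - X[j][d]))
                        for d in range(dim)]
            if f(cand) < f(X[j]):
                X[j] = cand
    return min(X, key=f)
```

Note that TLBO needs no algorithm-specific control parameters beyond the population size and the number of iterations, which is one reason it is considered easy to use.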
6.4 Conclusion
References
Roy, P. K., Sur, A., & Pradhan, D. K. (2013). Optimal short-term hydro-thermal scheduling using quasi-oppositional teaching learning based optimization. Engineering Applications of Artificial Intelligence, 26(10), 2516–2524.
Sultana, S., & Roy, P. K. (2014). Optimal capacitor placement in radial distribution systems using teaching learning based optimization. Electrical Power and Energy Systems, 54, 387–398.
Toğan, V. (2012). Design of planar steel frames using teaching-learning based optimization. Engineering Structures, 34, 225–232.
Chapter 7
Flower Pollination Algorithm (FPA)
7.1 Introduction
The flower pollination algorithm (FPA) was proposed by Yang (2012) for global optimization. This metaheuristic algorithm is inspired by the pollination of flowering plants in nature. Yang et al. (2013) used the eagle strategy with FPA to balance exploration and exploitation. Sharawi et al. (2014) employed FPA in a wireless sensor network for efficient selection of cluster heads and compared it with the Low-Energy Adaptive Clustering Hierarchy (LEACH); the results indicated that FPA outperformed LEACH. Sakib et al. (2014) used FPA and the bat algorithm (BA) to solve continuous optimization problems, testing and comparing the two algorithms on benchmark functions. Emary et al. (2014) applied FPA to a retinal vessel segmentation optimization problem. Platt (2014) used FPA in the calculation of dew point pressure in a system that exhibited double retrograde vaporization. El-henawy and Ismail (2014) combined FPA with the particle swarm optimization (PSO) algorithm to solve large integer programming problems and demonstrated that FPA improved the search accuracy. Abdel-Raouf et al. (2014) formulated Sudoku puzzles as an optimization problem and then employed a hybrid optimization method, the flower pollination algorithm with Chaotic Harmony Search (FPCHS), to obtain the optimal solutions. Yang et al. (2014) used a novel version of FPA to solve several multi-objective test functions. Trivedi et al. (2015) used FPA for the optimization of relay coordination in a wide electrical network, with the aim of increasing selectivity while reducing the fault clearing time to improve the reliability of the system. Bensouyad and Saidouni (2015) applied the discrete flower pollination algorithm (DFPA) to a graph coloring problem. Łukasik and Kowalski (2015) tested FPA on a number of continuous benchmark problems. Dubey et al. (2015) applied a modified flower pollination algorithm in modern power systems to find solutions to economic dispatch problems; they added a scaling factor to control the local pollination and compressed the exploitation stage to achieve the best solution. Bibiks et al. (2015) used DFPA to solve combinatorial optimization problems. Alam et al. (2015) applied FPA to determine the optimal parameters of the single-diode and two-diode models used to describe photovoltaic systems. In the design of a structural system, the optimal values of the design variables cannot be obtained analytically, and a structural engineering problem has various design constraints, so optimization is an important part of the structural design process; for this purpose, Nigdeli et al. (2016) used FPA to solve structural engineering problems related to pin-jointed plane frames, truss systems, deflection minimization of I-beams, tubular columns, and cantilever beams. Nabil (2016) developed a Modified Flower Pollination Algorithm (MFPA) by hybridizing FPA with the Clonal Selection Algorithm (CSA) and performed tests on 23 optimization benchmark problems to investigate the efficiency of the new algorithm; the results of MFPA were then compared with those of Simulated Annealing (SA), the Genetic Algorithm (GA), FPA, the Bat Algorithm (BA), and the Firefly Algorithm (FA), showing that the proposed MFPA was able to find more accurate solutions than FPA and the four other algorithms. Abdelaziz et al. (2016) applied FPA to derive the optimal sizing and allocation of capacitors in different radial distribution systems.
7 Flower Pollination Algorithm (FPA) 61
In global pollination, the new solution is generated by

x_i^(t+1) = x_i^t + γ L(λ) (g* − x_i^t)    (7.1)

where x_i^t = the pollen, or solution vector, at iteration t; g* = the current best solution among all solutions of the current generation; γ = a scale factor for controlling the step size; and L = the strength of the pollination, a step size drawn from a Lévy distribution. A Lévy flight is a random process in which the length of each jump follows the Lévy probability distribution function and has infinite variance. Following Yang (2012), L for a Lévy distribution is given by:

L ≈ [λ Γ(λ) sin(πλ/2) / π] × 1 / S^(1+λ),   (S ≫ S₀ > 0)    (7.2)

Local pollination is represented by

x_i^(t+1) = x_i^t + ε (x_j^t − x_k^t)    (7.3)

where x_j^t and x_k^t = two pollens from different flowers of the same plant. Mathematically, if x_j^t and x_k^t come from the same species or are selected from the same population, this becomes a local random walk when ε is drawn from a uniform distribution on [0, 1].
Table 7.1 lists the characteristics of the FPA and Fig. 7.1 shows the flowchart of
the FPA.
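A compact sketch of the two pollination rules described above (Mantegna's method for generating the Lévy step and greedy replacement of worse solutions are implementation assumptions):

```python
import math
import random

def levy_step(lam=1.5):
    # Mantegna-style approximation of a Levy-distributed step length
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / lam)

def fpa_minimize(f, lo, hi, dim, n=15, iters=100, p=0.5, gamma=0.1):
    """Toy FPA: global pollination via Levy flights toward the best flower,
    local pollination as a random walk between two flowers."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    g = min(X, key=f)
    for _ in range(iters):
        for i in range(n):
            if random.random() > p:        # global pollination, Eq. (7.1)
                L = levy_step()
                cand = [X[i][d] + gamma * L * (g[d] - X[i][d])
                        for d in range(dim)]
            else:                          # local pollination, Eq. (7.3)
                j, k = random.sample(range(n), 2)
                eps = random.random()
                cand = [X[i][d] + eps * (X[j][d] - X[k][d])
                        for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]
            if f(cand) < f(X[i]):          # keep only improvements
                X[i] = cand
        g = min(X, key=f)
    return g
```

The switch probability p decides, flower by flower, whether the occasionally very long Lévy jumps (exploration) or the short local walks (exploitation) are used.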
The size of the population of solutions (n), the scale factor γ for controlling the step size, the Lévy distribution parameter λ, and the switch probability (P) are user-defined parameters of the FPA. Determining the optimal parameters of the FPA is time-consuming and requires running the algorithm many times. It should be noted that the optimal parameters of the algorithm for one problem differ from those for other problems. Considering mixtures of parameters is an appropriate method for finding suitable values of the algorithm's parameters: the algorithm can be run several times for one mixture of parameters, the same process can be repeated for the other mixtures, and finally the results for the different sets of parameters can be compared and the best values determined. Yang (2012) suggested starting the modeling with P = 0.5 and λ = 1.5.
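The mixture-comparison procedure described above can be sketched generically (the toy objective in the usage example below is a placeholder assumption, standing in for an actual run of the algorithm):

```python
import itertools
import statistics

def tune(run_algorithm, param_grid, runs=3):
    """Run the algorithm `runs` times for every mixture of parameters and
    return the mixture with the best (lowest) average result."""
    best_params, best_score = None, float("inf")
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        avg = statistics.mean(run_algorithm(**params) for _ in range(runs))
        if avg < best_score:
            best_params, best_score = params, avg
    return best_params, best_score
```

For instance, `tune(lambda P, lam: abs(P - 0.5) + abs(lam - 1.5), {"P": [0.2, 0.5], "lam": [1.3, 1.5]})` picks the mixture closest to Yang's suggested starting values.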
64 M. Azad et al.
[Fig. 7.1 Flowchart of the FPA: if Rand > P, perform global pollination; otherwise, perform local pollination; set t = t + 1 and repeat until t exceeds the maximum number of iterations.]
7.6 Conclusion
This chapter described the flower pollination algorithm (FPA), which is based on the pollination of flowering plants in nature. The chapter presented a summary of FPA applications to different problems; then the natural process of pollination, the flower pollination algorithm itself, and a pseudocode of the FPA were presented.
References
Abdelaziz, A. Y., Ali, E. S., & Abd Elazim, S. M. (2016). Optimal sizing and locations of capacitors in radial distribution systems via flower pollination optimization algorithm and power loss index. Engineering Science and Technology, 19(1), 610–618.
Abdel-Raouf, O., & Abdel-Baset, M. (2014). A new hybrid flower pollination algorithm for solving constrained global optimization problems. International Journal of Applied Operational Research-An Open Access Journal, 4(2), 1–13.
Abdel-Raouf, O., El-Henawy, I., & Abdel-Baset, M. (2014). A novel hybrid flower pollination algorithm with chaotic harmony search for solving sudoku puzzles. International Journal of Modern Education and Computer Science, 6(3), 38.
Alam, D. F., Yousri, D. A., & Eteiba, M. B. (2015). Flower pollination algorithm based solar PV parameter estimation. Energy Conversion and Management, 101, 410–422.
Bekdaş, G., Nigdeli, S. M., & Yang, X. S. (2015). Sizing optimization of truss structures using flower pollination algorithm. Applied Soft Computing, 37, 322–331.
Bibiks, K., Li, J. P., & Hu, F. (2015). Discrete flower pollination algorithm for resource constrained project scheduling problem. International Journal of Computer Science and Information Security, 13(7), 8.
Dubey, H. M., Pandit, M., & Panigrahi, B. K. (2015). A biologically inspired modified flower pollination algorithm for solving economic dispatch problems in modern power systems. Cognitive Computation, 7(5), 594–608.
El-henawy, I., & Ismail, M. (2014). An improved chaotic flower pollination algorithm for solving large integer programming problems. International Journal of Digital Content Technology and its Applications, 8(3).
Emary, E., Zawbaa, H. M., Hassanien, A. E., Tolba, M. F., & Snášel, V. (2014). Retinal vessel segmentation based on flower pollination search algorithm. In Proceedings of the Fifth International Conference on Innovations in Bio-Inspired Computing and Applications IBICA 2014 (pp. 93–100). Springer International Publishing.
Łukasik, S., & Kowalski, P. A. (2015). Study of flower pollination algorithm for continuous optimization. In Intelligent Systems, 2014 (pp. 451–459). Springer International Publishing.
Nabil, E. (2016). A modified flower pollination algorithm for global optimization. Expert Systems with Applications, 57, 192–203.
Nigdeli, S. M., Bekdaş, G., & Yang, X. S. (2016). Application of the flower pollination algorithm in structural engineering. In Metaheuristics and optimization in civil engineering (pp. 25–42). Springer International Publishing.
Platt, G. M. (2014). Computational experiments with flower pollination algorithm in the calculation of double retrograde dew points. International Review of Chemical Engineering, 6(2), 95–99.
Sakib, N., Kabir, M. W. U., Subbir, M., & Alam, S. (2014). A comparative study of flower pollination algorithm and bat algorithm on continuous optimization problems. International Journal of Soft Computing and Engineering, 4(2014), 13–19.
Sharawi, M., Emary, E., Saroit, I. A., & El-Mahdy, H. (2014). Flower pollination optimization algorithm for wireless sensor network lifetime global optimization. International Journal of Soft Computing and Engineering, 4(3), 54–59.
Trivedi, I. N., Purani, S. V., & Jangir, P. K. (2015). Optimized over-current relay coordination using Flower Pollination Algorithm. In Advance Computing Conference (IACC), 2015 IEEE International (pp. 72–77). IEEE.
Yang, X. S. (2012). Flower pollination algorithm for global optimization. In International Conference on Unconventional Computing and Natural Computation (pp. 240–249). Berlin: Springer.
Yang, X. S., Deb, S., & He, X. (2013). Eagle strategy with flower algorithm. In 2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI) (pp. 1213–1217). IEEE.
Yang, X. S., Karamanoglu, M., & He, X. (2014). Flower pollination algorithm: A novel approach for multiobjective optimization. Engineering Optimization, 46(9), 1222–1237.
Chapter 8
Krill Herd Algorithm (KHA)
Abstract The krill herd algorithm (KHA) is a new metaheuristic search algorithm
based on simulating the herding behavior of krill individuals using a Lagrangian
model. This algorithm was developed by Gandomi and Alavi (2012), and preliminary studies illustrated its potential for solving numerous complex engineering optimization problems. In this chapter, the natural process behind a standard KHA
is described.
8.1 Introduction
Although the basic principles of these algorithms are similar and contain an
iterative mechanism, the iteration process differs in each algorithm. The main
objective of such a process is to search through the decision space for arrays of
decision variables that produce an optimum result. This process is usually inspired
by the natural phenomena, and is intended to imitate a natural feature that has been
evolved over millions of years (Gandomi and Alavi 2012). Consequently, there is
no limitation to the source of inspiration for these bio-inspired algorithms, and they
can imitate a vast domain of features, from the genetic evolution process of a
species to the foraging mechanism of bacteria. Swarm intelligence, which is an
imitation of an animal groups behavior, could serve as an inspiration source to
develop such algorithms.
Many studies have focused on capturing the underlying mechanism that governs
the development of formation grouping of various species of marine animals,
including the Antarctic krill (Flierl et al. 1999). The krill herds are aggregations
with no parallel orientation in both temporal and spatial scales (Brierley and Cox
2010). These creatures that can form large swamps are the source of inspiration for
the krill herd algorithm (KHA). The herding of the krill individuals is a
multi-objective process, including two main goals: (1) increasing krill density, and
(2) reaching the food. Density-dependent attraction of krill (increasing density) and finding food (areas of high food concentration) are used as objectives, which finally lead the krill to herd around the global optima. In this process, an individual krill
moves toward the best solution when it searches for the highest density and food.
The imaginary distances of the krill individuals serve as objective functions, and minimizing them is the priority of the optimization process. Hence, the closer the distance to the high density and the food, the smaller (better) the objective function (Gandomi and Alavi 2012).
Engineering optimization problems, which mostly have a nonlinear decision space, are complicated due to their numerous decision variables and complex constraints. Such conditions can be regarded as an advantage for metaheuristic algorithms over conventional optimization techniques. KHA is a novel metaheuristic search algorithm based on the herding behavior of krill individuals, with a Lagrangian model at its core. The algorithm was first introduced by Gandomi and Alavi (2012), and preliminary studies demonstrated its potential to outperform existing algorithms in solving complicated engineering problems (Gandomi et al. 2013a, b). Additionally, KHA has been validated on various engineering problems, including optimal design of civil structures (Gandomi et al. 2013a, b; Gandomi and Alavi 2016), power flow optimization (Mukherjee and Mukherjee 2015), and optimum operation of power plants (Mandal et al. 2014). Similarly, studies have illustrated that the power of the classical KHA tends towards global exploration (Bolaji et al. 2016). Several modifications of the standard KHA have been proposed, including the chaotic particle-swarm krill herd (CPKH) (Wang et al. 2013), the fuzzy krill herd algorithm (FKH) (Fattahi et al. 2016), and the discrete krill herd algorithm (DKH) (Bolaji et al. 2016). As a compatible and efficient algorithm, KHA can be a promising alternative for solving engineering optimization problems.
The basic core of a standard KHA is its krill-herding simulator. The krill herd disperses after a hypothetical attack from a predator; this is the initial step in the standard KHA. After such an event, each krill has two priorities: decreasing its distance from the food source and from the location of the highest density of the krill swarm. These imaginary distances act as the objective function, and minimizing them is the goal of each krill individual. Consequently, the time-dependent position of an individual krill is governed by the motion induced by other krill individuals (Ni), the foraging motion (Fi), and the physical diffusion (Di). Since any efficient optimization algorithm should be compatible with an arbitrary number of dimensions, each dimension representing a decision variable, the following Lagrangian model is generalized for an n-dimensional decision space (Gandomi and Alavi 2012):
$$\frac{dX_i}{dt} = N_i + F_i + D_i \qquad (8.1)$$
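Under the assumption of a simple Euler discretization of Eq. (8.1), the position update reduces to a single step; the function and variable names below are illustrative, not taken from the original source.

```python
import numpy as np

# Sketch of a discretized KHA Lagrangian update (Eq. 8.1), assuming the three
# motion components N_i, F_i, and D_i have already been computed for krill i.
def step_position(x_i, n_i, f_i, d_i, dt):
    """Euler step: X_i(t + dt) = X_i(t) + dt * (N_i + F_i + D_i)."""
    return x_i + dt * (n_i + f_i + d_i)

x = np.array([0.5, -1.0])
new_x = step_position(x, np.array([0.01, 0.0]), np.array([0.0, 0.02]),
                      np.zeros(2), dt=2.0)
```

The scale factor `dt` corresponds to the Δt of Eq. (8.17) introduced later in this chapter.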
Theoretically speaking, the krill herd has a tendency to move in a group. In other
words, forming a high-density swarm is considered as an advantage for the krill
community. Thus, krill individuals try to maintain a high density and move due to
their mutual effects. The motion induced by the krill herd can be expressed as (Gandomi and Alavi 2012):

$$N_i^{new} = N^{max}\alpha_i + \omega_n N_i^{old} \qquad (8.2)$$

in which N_i^new = motion of the ith krill individual induced by the krill herd at the current iteration; N^max = maximum induced speed (according to measured values, it is around 0.01 m/s) (Hofmann et al. 2004); α_i = direction of motion of the ith krill individual induced by the krill swarm; ω_n = inertia weight of the induced motion, in the range [0, 1]; and N_i^old = motion of the ith krill individual induced by the krill herd in the previous iteration. ω_n is one of the model parameters, and it acts as a weight for the previously calculated induced motion; a lower value of ω_n decreases the influence of N_i^old.
72 B. Zolghadr-Asli et al.
The direction of motion of the ith krill individual induced by the krill swarm (α_i), however, is influenced by both the nearby krill individuals (local effect) and the target swarm density (target effect), and it is given by Gandomi and Alavi (2012):
$$\alpha_i = \alpha_i^{local} + \alpha_i^{target} \qquad (8.3)$$

$$\alpha_i^{local} = \sum_{j=1}^{NN} \hat{K}_{i,j}\,\hat{X}_{i,j} \qquad (8.4)$$

$$\hat{X}_{i,j} = \frac{X_j - X_i}{\lVert X_j - X_i \rVert + \varepsilon} \qquad (8.5)$$

$$\hat{K}_{i,j} = \frac{K_i - K_j}{K^{worst} - K^{best}} \qquad (8.6)$$
in which X̂_{i,j} = unit vector pointing from the ith krill individual toward the jth neighboring krill individual; K̂_{i,j} = normalized fitness difference between the ith and jth krill individuals; K_i and K_j = fitness values of the ith and jth krill individuals, respectively; K^best and K^worst = best and worst fitness values among the krill individuals, respectively; ε = a small positive number added to avoid singularity; and NN = number of neighboring krill individuals for the ith krill.
Equation (8.5) represents the unit vector that connects the ith krill to the jth krill, while Eq. (8.6) calculates the normalized fitness value, which plays the role of a weight for the unit vector in Eq. (8.5). In fact, each calculated K̂_{i,j} X̂_{i,j} characterizes the effect of the jth neighboring krill. This influence can be (1) attractive (K̂_{i,j} > 0), which indicates that both krill individuals are moving toward one another; (2) repulsive (K̂_{i,j} < 0), which refers to a situation where both krill individuals are moving away from each other; or (3) neutral (K̂_{i,j} = 0), which suggests that both krill individuals are indifferent toward one another. The summation of these weighted vectors gives the influence of the neighboring krill individuals on the motion of the ith krill.
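The weighted summation of Eqs. (8.4)–(8.6) can be sketched as follows; the function name, the explicit neighbor lists, and the minimization convention (smaller K is better) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the local induced-motion direction (Eqs. 8.4-8.6):
# each neighbour j contributes a unit vector X_hat (Eq. 8.5) weighted by the
# normalized fitness difference K_hat (Eq. 8.6).
def alpha_local(x_i, k_i, neighbors_x, neighbors_k, k_best, k_worst, eps=1e-12):
    total = np.zeros_like(x_i)
    for x_j, k_j in zip(neighbors_x, neighbors_k):
        x_hat = (x_j - x_i) / (np.linalg.norm(x_j - x_i) + eps)  # Eq. (8.5)
        k_hat = (k_i - k_j) / (k_worst - k_best)                 # Eq. (8.6)
        total += k_hat * x_hat                                   # Eq. (8.4)
    return total

# One neighbour at [1, 0] that is fitter (smaller K) attracts the ith krill.
a = alpha_local(np.zeros(2), 3.0, [np.array([1.0, 0.0])], [1.0],
                k_best=1.0, k_worst=5.0)
```

A fitter neighbor yields a positive weight, so the resulting vector points toward that neighbor, matching the attractive case described above.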
To choose the number of neighboring krill individuals for any given krill, different strategies can be implemented. For instance, a neighboring ratio can simply be defined to find the number of the closest krill individuals. Reportedly, based on the actual behavior of krill individuals, a sensing distance (ds) is a proper criterion to determine the neighboring krill individuals (Fig. 8.1). The sensing distance for each krill individual in each iteration can be determined by Gandomi and Alavi (2012):
$$d_{s,i} = \frac{1}{5N}\sum_{j=1}^{N} \lVert X_i - X_j \rVert \qquad (8.7)$$
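A minimal sketch of the sensing-distance rule of Eq. (8.7); the helper names are illustrative, and the strict inequality for membership is an assumption.

```python
import numpy as np

# Sensing distance of Eq. (8.7): one fifth of the mean distance from krill i
# to all N krill; individuals closer than this radius count as neighbours.
def sensing_distance(i, positions):
    n = len(positions)
    return sum(np.linalg.norm(positions[i] - positions[j])
               for j in range(n)) / (5 * n)

def neighbors_of(i, positions):
    ds = sensing_distance(i, positions)
    return [j for j in range(len(positions))
            if j != i and np.linalg.norm(positions[i] - positions[j]) < ds]

pts = [np.array([0.0]), np.array([0.5]), np.array([10.0])]
```

With the three sample positions above, only the nearby krill falls inside the sensing ambit of the first one; the distant individual is excluded.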
Fig. 8.1 Schematic representation of the sensing ambit around a krill individual
The target effect of the best krill individual is formulated as (Gandomi and Alavi 2012):

$$\alpha_i^{target} = C^{best}\,\hat{K}_{i,best}\,\hat{X}_{i,best} \qquad (8.8)$$

in which C^best = effective coefficient of the krill individual with the best fitness on the ith krill individual. The target effect α_i^target leads the solution toward the probable location of the global optima, and hence it should be more effective than the other krill individuals, such as the neighboring ones. Herein, C^best in Eq. (8.8) is defined as (Gandomi and Alavi 2012):
$$C^{best} = 2\left(rand + \frac{I}{I^{max}}\right) \qquad (8.9)$$
where rand = a uniformly distributed random value in the range [0, 1]; I = current iteration number; and I^max = maximum number of iterations. Equation (8.9) suggests that the effect of the target krill is enhanced as the iterations proceed.
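Equation (8.9) can be written as a one-line coefficient; the injectable random source `rng` is an illustrative device (not in the original formulation) that makes the factor testable.

```python
import random

# Effective coefficient of the best krill individual (Eq. 8.9). The expected
# value of C_best grows with the iteration counter I, so the pull toward the
# best individual strengthens as the run advances.
def c_best(iteration, max_iterations, rng=random.random):
    return 2.0 * (rng() + iteration / max_iterations)
```

With the random term fixed at zero, the coefficient grows from 0 at the start of the run to 2 at the final iteration, which is the deterministic core of the enhancement described above.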
The foraging motion, which is centered around the krill herd's tendency to find nutrition, has two terms in its structure: (1) the location of the food, and (2) the previous experience of each individual krill about the food location. This mechanism can be formulated for each individual krill as follows (Gandomi and Alavi 2012):
$$F_i = V_f \beta_i + \omega_f F_i^{old} \qquad (8.10)$$

where

$$\beta_i = \beta_i^{food} + \beta_i^{best} \qquad (8.11)$$

$$X^{food} = \frac{\sum_{i=1}^{N} \frac{1}{K_i} X_i}{\sum_{i=1}^{N} \frac{1}{K_i}} \qquad (8.12)$$

$$\beta_i^{food} = C^{food}\,\hat{K}_{i,food}\,\hat{X}_{i,food} \qquad (8.13)$$

in which V_f = foraging speed; ω_f = inertia weight of the foraging motion in the range [0, 1]; F_i^old = foraging motion of the previous iteration; X^food = position of the virtual center of food concentration; and C^food = food coefficient, which decreases over the iterations as

$$C^{food} = 2\left(1 - \frac{I}{I^{max}}\right) \qquad (8.14)$$
The main reason behind the food attraction is to ensure that the krill swarm finds the global optima. As a result, when the krill herd is randomly spread through the decision space, this motion can help the herd gather around the plausible location of the food (global optima). However, as the search process advances in each iteration, the herd must be able to concentrate on a limited space to locate the best solution. Thus, as shown in the formula, this motion decreases with time. This can be considered an efficient global optimization strategy that helps improve the efficiency of KHA.
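The virtual food center of Eq. (8.12) is a fitness-weighted centroid; the sketch below assumes a minimization problem with strictly positive fitness values, so better (smaller) K_i receive larger weights 1/K_i.

```python
import numpy as np

# Virtual food centre of Eq. (8.12): a centroid of the krill positions in
# which each krill is weighted by the inverse of its objective value.
def food_center(positions, fitness):
    w = 1.0 / np.asarray(fitness, dtype=float)
    return (w[:, None] * np.asarray(positions)).sum(axis=0) / w.sum()

X = [[0.0, 0.0], [4.0, 0.0]]   # two krill positions
K = [1.0, 3.0]                  # the krill at the origin is fitter
xf = food_center(X, K)          # centre pulled toward the fitter krill
```

The fitter krill dominates the weighting, so the estimated food location lies much closer to it than a plain average of positions would.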
Each individual krill also moves according to its memory of the previously visited locations of food. The effect of the best fitness of the ith krill individual can be expressed as (Gandomi and Alavi 2012):

$$\beta_i^{best} = \hat{K}_{i,best}\,\hat{X}_{i,best} \qquad (8.15)$$

in which K̂_{i,best} relates to the best previously encountered position of the ith krill individual.
The two mechanisms behind inducing motion to each individual krill (motion induced by the krill herd and foraging motion) ensure that, after the initial dispersion of the krill herd throughout the decision space, the herd gathers around what is considered to be the global optima. Yet, to ensure that the decision space is inspected thoroughly by the krill herd, a random process is required to spread a sufficient number of krill individuals across the decision space. If the random process is too strong, the herd will not gather around a central location; yet the lack of such a mechanism could prevent a proper search throughout the decision space. The physical diffusion term is introduced in the KHA as this random process. This motion can be expressed in terms of a maximum diffusion speed, a random direction vector, and a mathematical mechanism that ensures the decreasing effect of this term as the search for the global optimal solution continues. Thus, the physical diffusion term can be formulated as follows (Gandomi and Alavi 2012):
$$D_i = D^{max}\left(1 - \frac{I}{I^{max}}\right)\delta \qquad (8.16)$$
in which D^max = maximum diffusion speed, in the range [0.002, 0.010] (m/s) (Gandomi and Alavi 2012); and δ = random directional vector whose elements are random values between −1 and 1. A random selection can also be employed to determine the value of D^max. The physical diffusion motion introduced in Eq. (8.16) works on the basis of an annealing schedule, and the random speed decreases linearly with time.
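The annealed random term of Eq. (8.16) can be sketched directly; the default D_max of 0.005 is an illustrative choice from inside the reported range, and the injectable `rng` is an assumption for reproducibility.

```python
import numpy as np

# Physical diffusion of Eq. (8.16): a random direction vector delta with
# elements in [-1, 1], scaled by D_max and annealed linearly with the
# iteration counter I, so the random motion vanishes by the final iteration.
def physical_diffusion(n_dims, iteration, max_iterations, d_max=0.005,
                       rng=np.random.default_rng(0)):
    delta = rng.uniform(-1.0, 1.0, size=n_dims)   # random direction vector
    return d_max * (1.0 - iteration / max_iterations) * delta
```

Early in the run the term can perturb each coordinate by up to D_max, while at the last iteration it contributes nothing, matching the exploration-then-exploitation behavior described above.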
The above three mechanisms allow one to calculate the direction and speed of relocation for each individual krill at any given iteration. In other words, the defined motions iteratively move the position of a krill individual toward the position that is expected to be the best one. The motion induced by other krill individuals and the foraging motion work in parallel, which makes the KHA a potentially powerful algorithm for solving complex optimization problems.
The KHA formulation suggests that if any of K_j, K^best, K^food, and K_i^best illustrates a better performance than the ith krill individual, it can have an attractive effect that inspires this krill to move toward the corresponding location, in the hope that such a move improves its fitness value. Such a mechanism can also have the opposite effect: K_j, K^best, K^food, and K_i^best can repulse the ith krill individual, causing it to move away from the aforementioned locations. Additionally, the physical diffusion spreads the krill herd throughout the decision space for a comprehensive search of the plausible arrays of decision variables. After calculating the motions for every krill in the herd, the position vector of the ith krill individual during the time interval from t to t + Δt is given by
$$X_i(t + \Delta t) = X_i(t) + \Delta t\,\frac{dX_i}{dt} \qquad (8.17)$$
It should be noted that Δt is one of the most important model parameters, since it acts as a scale factor for the speed vector. Thus, it should be carefully set for each optimization problem. Δt, which depends completely on the search space, can be estimated by Gandomi and Alavi (2012):
$$\Delta t = C_t \sum_{j=1}^{NV} \left( UB_j - LB_j \right) \qquad (8.18)$$
in which NV = number of decision variables; and LB_j and UB_j = lower and upper bounds of the jth variable, respectively. It is empirically found that C_t is a constant within (0, 2]. Low values of C_t let the krill individuals search the space at a slower, yet more careful, pace. One should bear in mind that this parameter is the most important parameter of the model and thus needs to be carefully adapted to each optimization problem.
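Equation (8.18) is a one-line computation over the variable bounds; the function name and the sample C_t value are illustrative.

```python
# Speed scale factor of Eq. (8.18): proportional to the total extent of the
# decision space, with C_t an empirical constant in (0, 2].
def delta_t(lower_bounds, upper_bounds, c_t=0.5):
    return c_t * sum(ub - lb for lb, ub in zip(lower_bounds, upper_bounds))
```

For a two-variable problem with bounds [0, 10] and [0, 5] and C_t = 0.5, the extents sum to 15, giving Δt = 7.5.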
Finally, it should be pointed out that although the above-mentioned mechanisms form the core of a standard KHA, the algorithm is also compatible with external search operators, including but not limited to genetic operators such as crossover and mutation. While such operators can surely enhance the performance of the standard KHA, their presence is not obligatory (Gandomi and Alavi 2012). A basic representation of the KHA is shown in Fig. 8.2. Additionally, Table 8.1 summarizes the characteristics of the standard KHA.
Begin
  Set the inertia weights of the motion induced (ωn) and foraging motion (ωf), and the other model parameters
  Define Δt
  Generate the initial population of N krill with random positions and evaluate them
  While (the termination criterion is not satisfied)
    For i = 1: N
      Motion induced by other krill individuals
      Foraging activity
      Physical diffusion
      Update the position of the ith krill by Eq. (8.17) and evaluate it
    End for
    Sort the population/krill from best to worst and find the current best
  End while
  Report the best solution found
End
8.8 Conclusion
This chapter described the krill herd algorithm (KHA), which is a novel, yet rel-
atively newly introduced metaheuristic optimization algorithm. After a brief review
of the vast applications of KHA, including complex engineering optimization
problems, the standard KHA and its mechanism were described. In the nal section,
a pseudo code of the standard KHA was also presented.
References
Bolaji, A. L. A., Al-Betar, M. A., Awadallah, M. A., Khader, A. T., & Abualigah, L. M. (2016). A comprehensive review: Krill Herd algorithm (KH) and its applications. Applied Soft Computing, 49, 437–446.
Brierley, A. S., & Cox, M. J. (2010). Shapes of krill swarms and fish schools emerge as aggregation members avoid predators and access oxygen. Current Biology, 20(19), 1758–1762.
Fattahi, E., Bidar, M., & Kanan, H. R. (2016). Fuzzy krill herd (FKH): An improved optimization algorithm. Intelligent Data Analysis, 20(1), 153–165.
Flierl, G., Grünbaum, D., Levin, S., & Olson, D. (1999). From individuals to aggregations: The interplay between behavior and physics. Journal of Theoretical Biology, 196(4), 397–454.
Gandomi, A. H., & Alavi, A. H. (2012). Krill herd: A new bio-inspired optimization algorithm. Communications in Nonlinear Science and Numerical Simulation, 17(12), 4831–4845.
Gandomi, A. H., & Alavi, A. H. (2016). An introduction of krill herd algorithm for engineering optimization. Journal of Civil Engineering and Management, 22(3), 302–310.
Gandomi, A. H., Alavi, A. H., & Talatahari, S. (2013a). Structural optimization using krill herd algorithm. Chapter 15 in Swarm intelligence and bio-inspired computation: Theory and applications. London, UK: Elsevier.
Gandomi, A. H., Talatahari, S., Tadbiri, F., & Alavi, A. H. (2013b). Krill herd algorithm for optimum design of truss structures. International Journal of Bio-Inspired Computation, 5(5), 281–288.
Gandomi, A. H., Yang, X. S., & Alavi, A. H. (2013c). Cuckoo search algorithm: A metaheuristic approach to solve structural optimization problems. Engineering with Computers, 29(1), 17–35.
Hofmann, E. E., Haskell, A. E., Klinck, J. M., & Lascara, C. M. (2004). Lagrangian modelling studies of Antarctic krill (Euphausia superba) swarm formation. ICES Journal of Marine Science, 61(4), 617–631.
Mandal, B., Roy, P. K., & Mandal, S. (2014). Economic load dispatch using krill herd algorithm. International Journal of Electrical Power & Energy Systems, 57, 1–10.
Mukherjee, A., & Mukherjee, V. (2015). Solution of optimal power flow using chaotic krill herd algorithm. Chaos, Solitons & Fractals, 78, 10–21.
Price, H. J. (1989). Swimming behavior of krill in response to algal patches: A mesocosm study. Limnology and Oceanography, 34(4), 649–659.
Wang, G. G., Gandomi, A. H., & Alavi, A. H. (2013). A chaotic particle-swarm krill herd algorithm for global numerical optimization. Kybernetes, 42(6), 962–978.
Chapter 9
Grey Wolf Optimization
(GWO) Algorithm
Abstract This chapter describes the grey wolf optimization (GWO) algorithm as one of the new meta-heuristic algorithms. First, a brief literature review is presented, and then the natural process behind the GWO algorithm is described. The optimization process and a pseudocode of the GWO algorithm are also presented in this chapter.
9.1 Introduction
algorithm by removing weak individuals from the society. Comparison with the basic GWO illustrated that the proposed algorithm had a better performance in convergence rate and exploration, and also avoided trapping in local optima.
Sulaiman et al. (2015) used GWO to solve an optimal reactive power dispatch (ORPD) problem and compared it with swarm intelligence (SI), evolutionary computation (EC), PSO, the harmony search algorithm (HSA), the gravitational search algorithm (GSA), invasive weed optimization, and the modified imperialist competitive algorithm with invasive weed optimization (MICA-IWO). The results demonstrated that GWO found more desirable optimal solutions than the others.
GWO is inspired by the social hierarchy and the intelligent hunting method of grey wolves. Usually, grey wolves are at the top of the food chain in their habitats. Grey wolves mostly live in packs of 5–12 individuals. In particular, grey wolves maintain a strict social hierarchy. As shown in Fig. 9.1, the leaders of a pack of grey wolves (alpha) are a male and a female wolf that are responsible for making decisions for their pack, such as the sleeping place, hunting, and wake-up time. Mostly, the other individuals of the pack must obey the decisions made by the alpha. However, some democratic behaviors can be observed in the social hierarchy of grey wolves (the alpha may follow other individuals of the pack). In gatherings, individuals confirm the alpha's decisions by holding their tails down. It is also interesting to know that it
is not necessary for the alpha to be the strongest one in the pack. Managing the pack is the main role of the alpha. In a pack of grey wolves, discipline and organization are the most important things. The level next to the alpha in the social hierarchy of grey wolves is the beta, whose role is to help the alpha in making decisions. Betas can be either male or female wolves, and the beta is the best candidate to substitute for an alpha when one of them becomes old or dies. The beta must respect the alpha but can command the other individuals. The beta is the consultant of the alpha and is responsible for disciplining the pack; it reinforces the alpha's orders and gives feedback to the alpha. The weakest level in a pack of grey wolves is the omega, which plays the role of scapegoat. The wolves at the level of omega have to obey the other individuals' orders, and they are the last wolves allowed to eat. Omegas may seem to be the least important individuals in the pack, but without them, internal fights and other problems can be observed. This can be attributed to the omega's role in venting the violence and frustration of the other wolves, which helps satisfy the other individuals and maintain the dominance structure of grey wolves. Sometimes, the omega plays the role of babysitter in the pack. The remaining wolves, other than the alpha, beta, and omega, are called subordinates (delta). The wolves at the level of delta obey the alpha and beta wolves and dominate the omega wolves. They act as scouts, sentinels, elders, hunters, and caretakers in the pack. Scouts are responsible for watching the boundaries and territory, and they should alarm the pack when facing danger. Sentinels are in charge of establishing security. Elders are the experienced wolves that are candidates for alpha and beta. Hunters help the alpha and beta in hunting and preparing food for the pack, while caretakers look after the weak, ill, and wounded wolves.
In addition to the social hierarchy in a pack of grey wolves, group hunting is another interesting social behavior of grey wolves. According to Muro et al. (2011), grey wolf hunting includes the following three main parts:
(1) Tracking, chasing, and approaching the prey.
(2) Pursuing, encircling, and harassing the prey till it stops moving.
(3) Attacking the prey.
These two social behaviors of grey wolf packs (social hierarchy and hunting technique) are modeled in the GWO algorithm.
In this section, the mathematical modeling of the social hierarchy of grey wolves and of their hunting technique (tracking, encircling, and attacking the prey) in the GWO algorithm is detailed.
84 H. Rezaei et al.
In order to mathematically model the social hierarchy of grey wolves in the GWO algorithm, the best solution is considered the alpha (α). Accordingly, the second and third best solutions are considered the beta (β) and delta (δ), respectively, and the remaining solutions are assumed to be omegas (ω). In the GWO algorithm, hunting (optimization) is guided by α, β, and δ, and the ω wolves follow them.
As aforementioned, grey wolves encircle the prey during the hunting process. The encircling behavior of grey wolves hunting a prey can be expressed as (Mirjalili et al. 2014):
$$\vec{D} = \left|\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)\right| \qquad (9.1)$$

$$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D} \qquad (9.2)$$
where t = iteration number; A and C = coefficient vectors; X_p = position vector of the prey; X = position vector of the grey wolf; and D = calculated vector used to specify the new position of the grey wolf. A and C can be calculated by Mirjalili et al. (2014):
$$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a} \qquad (9.3)$$

$$\vec{C} = 2 \cdot \vec{r}_2 \qquad (9.4)$$
where a = a vector whose elements decrease linearly from 2 to 0 over the iterations; and r_1 and r_2 = random vectors in [0, 1]. As shown in Fig. 9.2, a grey wolf at (x, y) can change its position based on the position of the prey at (x′, y′). Different places around the best agent can be reached with respect to the current position by adjusting A and C. For instance, by setting A = (1, 0) and C = (1, 1), the position of the grey wolf is updated to (x′ − x, y′).
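The encircling step of Eqs. (9.1)–(9.4) can be sketched for a single wolf; the function name and the injectable random generator are illustrative, and `a` is assumed to be decreased linearly from 2 to 0 outside this function.

```python
import numpy as np

# One encircling update of the GWO algorithm (Eqs. 9.1-9.4) for one wolf
# and a known prey position.
def encircle(x_wolf, x_prey, a, rng=np.random.default_rng(0)):
    r1 = rng.random(x_wolf.shape)
    r2 = rng.random(x_wolf.shape)
    A = 2.0 * a * r1 - a              # Eq. (9.3)
    C = 2.0 * r2                      # Eq. (9.4)
    D = np.abs(C * x_prey - x_wolf)   # Eq. (9.1)
    return x_prey - A * D             # Eq. (9.2)
```

Note that when a reaches 0 the coefficient A vanishes, so the wolf lands exactly on the prey; for larger a, the random A and C place it anywhere in a neighborhood around the prey.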
Note that the random vectors r_1 and r_2 let the grey wolf reach any of the positions/nodes shown in Fig. 9.2. Therefore, a grey wolf can be placed at any random position around the prey by using Eqs. (9.1) and (9.2). In the same way, in an n-dimensional decision space, grey wolves can move to any node of a hypercube around the best solution (position of the prey). They can distinguish the position of the prey and encircle it. Usually, the hunting operation is guided by α, while β and δ provide support for α. In the decision space of an optimization problem, however, we have no prior idea about the optimum solution.
Thus, in order to simulate the hunting behavior of grey wolves, we assume that α (the best candidate solution), β, and δ have better knowledge about the potential position of the prey. Therefore, the algorithm saves the three best solutions achieved so far and forces the others (i.e., the omega wolves) to update their positions toward the best places in the decision space. Such hunting behavior can be modeled by Mirjalili et al. (2014):
$$\vec{D}_\alpha = \left|\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}\right|; \quad \vec{D}_\beta = \left|\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}\right|; \quad \vec{D}_\delta = \left|\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}\right| \qquad (9.5)$$

$$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha; \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta; \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta \qquad (9.6)$$

$$\vec{X}(t+1) = \frac{\vec{X}_1 + \vec{X}_2 + \vec{X}_3}{3} \qquad (9.7)$$
Figure 9.3 shows how a search agent updates its position based on the positions of α, β, and δ in a 2D search space. As shown in Fig. 9.3, the final position (solution) lies inside a circle specified by the positions of α, β, and δ in the decision space. In other words, α, β, and δ estimate the position of the prey, and the other wolves update their new positions randomly around the prey.
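The three-leader update of Eqs. (9.5)–(9.7) can be sketched as follows; the function name and the injectable random generator are illustrative assumptions.

```python
import numpy as np

# Hunting update of Eqs. (9.5)-(9.7): one candidate position is computed
# around each of the alpha, beta, and delta wolves, and the new position is
# their average.
def hunt_update(x, x_alpha, x_beta, x_delta, a, rng=np.random.default_rng(0)):
    candidates = []
    for leader in (x_alpha, x_beta, x_delta):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        A = 2.0 * a * r1 - a
        C = 2.0 * r2
        D = np.abs(C * leader - x)          # Eq. (9.5)
        candidates.append(leader - A * D)   # Eq. (9.6)
    return sum(candidates) / 3.0            # Eq. (9.7)
```

When a is 0 every candidate coincides with its leader, so the update collapses to the centroid of α, β, and δ, which is the circle center sketched in Fig. 9.3.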
As aforementioned, grey wolves finish the hunting process by attacking the prey until it stops moving. In order to model the attacking process, the value of a is decreased over the iterations. Note that as a decreases, the fluctuation range of A decreases too. In other words, A is a random value in the range [−2a, 2a], where a decreases from 2 to 0 over the iterations. When the random value of A is in the range [−1, 1], the next position of a wolf lies between its current position and the position of the prey. As illustrated in Fig. 9.4, when |A| < 1, grey wolves attack the prey.
By using the operators provided so far, the GWO algorithm lets the search agents update their positions based on the positions of α, β, and δ (i.e., move toward the prey). It is true that the encircling operator of the GWO algorithm tends to confine solutions around local optima, but GWO also has other operators to discover new solutions.
Grey wolves often search for the prey according to the positions of α, β, and δ. They diverge from each other to explore the position of the prey and then converge to attack it. In order to mathematically model this divergence, A is given random values greater than 1 or less than −1 to force the search agent to diverge from the prey, which emphasizes the global search in GWO. Figure 9.4 illustrates that when |A| > 1, the grey wolf is forced to move away from the prey (local optimum) to search for better solutions in the decision space.
The GWO algorithm has another component, C, that assists the algorithm in discovering new solutions. As shown in Eq. (9.4), the elements of vector C lie within the range [0, 2]. This component provides random weights for the prey to randomly emphasize (C > 1) or deemphasize (C < 1) the impact of the prey in defining the distance in Eq. (9.1). It helps the GWO algorithm behave more randomly, in favor of exploration, and keeps the search agents away from local optima during the optimization process. Note that, unlike A, C is not decreased over the iterations; C is required in the GWO algorithm because it provides a global search in the decision space not only in the initial iterations but also in the final ones. This component is very useful for avoiding local optima, especially in the final iterations. The C vector can also be interpreted as the effect of obstacles that hinder approaching the prey in nature. Generally, such hindrances appear in the natural hunting process of grey wolves and prevent them from quickly and conveniently approaching the prey, which is exactly what C does in the optimization process of the GWO algorithm.
Table 9.1 presents the characteristics of the GWO algorithm.
The optimization process of GWO starts with creating a random population of grey wolves (candidate solutions). Over the iterations, the α, β, and δ wolves estimate the probable position of the prey (optimum solution). Grey wolves update their positions based on their distances from the prey. In order to balance exploration and exploitation during the search process, the parameter a decreases from 2 to 0. If |A| > 1, the candidate solutions diverge from the prey; if |A| < 1, the candidate solutions converge toward the prey. This process continues, and the GWO algorithm terminates when the stopping criteria are satisfied. To understand how the GWO algorithm solves optimization problems theoretically, some notes can be summarized as follows:
• The concept of social hierarchy in the GWO algorithm helps grade the solutions and save the best solutions obtained up to the current iteration.
• The encircling mechanism defines a circle-shaped neighborhood around the solutions, which can be extended to a hyper-sphere in higher dimensions.
• The random parameters A and C help grey wolves (candidate solutions) define hyper-spheres with different random radii.
• The hunting approach implemented in the GWO algorithm allows grey wolves (candidate solutions) to locate the probable position of the prey (optimum solution).
• The adaptive values of the parameters A and a guarantee exploration and exploitation in the GWO algorithm and allow it to transfer smoothly between them.
• By decreasing the value of A, half of the iterations are devoted to exploration (|A| > 1) and the other half to exploitation (|A| < 1).
• a and C are the two main parameters of the GWO algorithm.
Figure 9.5 shows the flowchart of the GWO algorithm with details on the
optimization process.
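The optimization process described above can be condensed into a short runnable sketch; the population size, bounds, iteration budget, and the toy sphere objective are illustrative choices, not prescribed by the GWO formulation.

```python
import numpy as np

# Condensed sketch of a full GWO run: random initialization, linear decay of
# a from 2 to 0, and the three-leader position update of Eqs. (9.5)-(9.7).
def gwo(objective, n_wolves=20, n_dims=2, lb=-5.0, ub=5.0, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_wolves, n_dims))
    for t in range(max_iter):
        fitness = np.array([objective(x) for x in X])
        order = np.argsort(fitness)                       # grade the wolves
        x_alpha, x_beta, x_delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * t / max_iter                      # 2 -> 0 linearly
        for i in range(n_wolves):
            cands = []
            for leader in (x_alpha, x_beta, x_delta):
                r1, r2 = rng.random(n_dims), rng.random(n_dims)
                A, C = 2.0 * a * r1 - a, 2.0 * r2
                D = np.abs(C * leader - X[i])             # Eq. (9.5)
                cands.append(leader - A * D)              # Eq. (9.6)
            X[i] = np.clip(sum(cands) / 3.0, lb, ub)      # Eq. (9.7)
    fitness = np.array([objective(x) for x in X])
    return X[np.argmin(fitness)]

best = gwo(lambda x: float(np.sum(x ** 2)))   # minimize a toy sphere function
```

On the sphere function the best wolf should end close to the origin, illustrating the exploitation phase that dominates as a approaches 0.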
Begin
  Generate the initial population of grey wolves (search agents)
  Initialize a, A, and C
  Calculate the fitness values of search agents and grade them (Xα = the best solution, Xβ = the second best solution, and Xδ = the third best solution)
  t = 0
  While (t < maximum number of iterations)
    For each search agent
      Update its position by Eq. (9.7)
    End for
    Update a, A, and C
    Calculate the fitness values of all search agents and grade them (update Xα, Xβ, and Xδ)
    t = t + 1
  End while
  Report Xα as the best solution
End
9.6 Conclusions
This chapter described the grey wolf optimization (GWO) algorithm as one of the new meta-heuristic algorithms. The GWO algorithm is inspired by the life style of grey wolf packs (social hierarchy and hunting mechanism). This chapter also presented a brief literature review of GWO, described the natural process of the grey wolves' life style and the mathematical equations of GWO, and finally presented a pseudocode of GWO.
References
Gholizadeh, S. (2015). Optimal design of double layer grids considering nonlinear behaviour by sequential grey wolf algorithm. Journal of Optimization in Civil Engineering, 5(4), 511–523.
Mech, L. D. (1999). Alpha status, dominance, and division of labor in wolf packs. Canadian Journal of Zoology, 77(8), 1196–1203.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61.
Mirjalili, S. (2015). How effective is the grey wolf optimizer in training multi-layer perceptron. Applied Intelligence, 43(1), 150–161.
Mirjalili, S. M., & Mirjalili, S. Z. (2015). Full optimizer for designing photonic crystal waveguides: IMoMIR framework. IEEE Photonics Technology Letters, 27(16), 1776–1779.
Mirjalili, S. M., Mirjalili, S., & Mirjalili, S. Z. (2015). How to design photonic crystal LEDs with artificial intelligence techniques. Electronics Letters, 51(18), 1437–1439.
Muro, C., Escobedo, R., Spector, L., & Coppinger, R. (2011). Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behavioral Processes, 88(3), 192–197.
Naderizadeh, M., & Baygi, S. J. M. (2015). STATCOM with grey wolf optimizer algorithm based PI controller for a grid-connected wind energy system. International Research Journal of Applied and Basic Sciences, 9(8), 14–21.
Noshadi, A., Shi, J., Lee, W. S., Shi, P., & Kalam, A. (2015). Optimal PID-type fuzzy logic controller for a multi-input multi-output active magnetic bearing system. Neural Computing and Applications, 27(7), 1–16.
Saremi, S., Mirjalili, S. Z., & Mirjalili, S. M. (2015). Evolutionary population dynamics and grey wolf optimizer. Neural Computing and Applications, 26(5), 1257–1263.
Sulaiman, M. H., Mustaffa, Z., Mohamed, M. R., & Aliman, O. (2015). Using the grey wolf optimizer for solving optimal reactive power dispatch problem. Applied Soft Computing, 32, 286–292.
Wong, L. I., Sulaiman, M. H., & Mohamed, M. R. (2015). Solving economic dispatch problems with practical constraints utilizing grey wolf optimizer. Applied Mechanics and Materials, 785, 511–515. Trans Tech Publications.
Yusof, Y., & Mustaffa, Z. (2015). Time series forecasting of energy commodity using grey wolf optimizer. In Proceedings of the International MultiConference of Engineers and Computer Scientists (IMECS 2015), Hong Kong, 18–20 March.
Chapter 10
Shark Smell Optimization (SSO)
Algorithm
Abstract In this chapter, the shark smell optimization (SSO) algorithm is presented, which is inspired by the shark's ability to hunt based on its strong smell sense. In Sect. 10.1, an overview of the implementations of SSO is presented. The underlying idea of the algorithm is discussed in Sect. 10.2. The mathematical formulation and a pseudocode are presented in Sects. 10.3 and 10.4, respectively. Section 10.5 is devoted to the conclusion.
10.1 Introduction
Generally, all animals have abilities that ensure their survival in nature. Some species have special abilities that distinguish them from others (Costa and Sinervo 2004). Finding the prey and the movement of the hunter toward the prey are two important factors in the hunting process. Animals that are able to find the prey in a short time, with a correct movement, are successful hunters. The shark is one of the most well-known and superior hunters in nature. The reason for this superiority is the shark's ability to find the prey in a short time, based on its strong smell sense, in a large search space.
Based on this ability of sharks, Abedinia et al. (2014) developed a meta-heuristic algorithm named shark smell optimization (SSO) and evaluated its efficiency on benchmark optimization problems.
The olfactory system in each animal is the primary sensory system which responds to chemical signals from a remote source. In fishes, the smell receptors are located in the olfactory pits, which are positioned on the sides of their heads. Each pit has two outside openings through which water flows in and out. The water movement inside the pit is driven by the wave motion of tiny hairs on the cells lining the pit and by the force caused by the movement of the fish in water. Dissolved chemicals bind to a pleated surface in the olfactory nerve endings (Abedinia et al. 2014). In vertebrates, unlike other sensory nerves, the olfactory receptors are directly connected to the brain without any nerve intermediaries.
Fig. 10.1 Schematic of the shark's movement toward the source of the smell
The smell impulses are received by the portion at the front of the brain called the olfactory bulb. Fishes have two olfactory bulbs, each located in an olfactory pit. A larger share of the olfactory pit surface devoted to smell nerves, together with larger smell-processing centers in the brain, makes a fish's sense of smell stronger (Magnuson 1979). Eels and sharks have the largest olfactory bulbs for smell information processing. About 400 million years ago, the first sharks appeared in the oceans as the superior hunters in nature. One of the reasons for the shark's survival in nature is its ability to capture prey with its strong sense of smell.
The shark's sense of smell is one of its most effective senses. When a shark swims in water, the water flows through its nostrils, which are located along the sides of its snout. After the water enters the olfactory pits, it flows through the folds of skin which are covered with sensory cells. Owing to these sensory cells, some sharks can detect the slightest trace of blood (Sfakiotakis et al. 1999). For example, a shark can detect one drop of blood in a large swimming pool. Accordingly, a shark can smell an injured fish from up to one kilometer away (Abedinia et al. 2014).
The shark's sense of smell acts as its guide. The smell that comes from the left side of the shark passes the left pit before entering the right pit; this process helps the shark find the source of the smell (Wu and Yao-Tsu 1971). The schematic of the shark's movement toward the source of the smell is shown in Fig. 10.1.
In this movement, odor concentration plays an important role in guiding the shark to its prey. In other words, a higher concentration results in a more accurate movement of the shark. This characteristic is the basis for developing an optimization algorithm to find the optimal solution of a problem.
96 S. Mohammad-Azari et al.
The search process begins when the shark smells odor. In fact, the particles of odor diffuse weakly from an injured fish (the prey). In order to model this process, a population of initial solutions is randomly generated for the optimization problem in the feasible search space. Each of these solutions represents a particle of odor, which marks a possible position of the shark at the beginning of the search process:
\{x_1^1, x_2^1, \ldots, x_{NP}^1\}, \quad (10.1)

where x_i^1 = ith initial position of the population vector (the ith initial solution); and NP = population size. The related optimization problem can be expressed as:

x_i^1 = [x_{i,1}^1, x_{i,2}^1, \ldots, x_{i,ND}^1], \quad i = 1, 2, \ldots, NP, \quad (10.2)

where x_{i,j}^1 = jth dimension of the shark's ith position (the jth decision variable of the ith position of the shark, x_i^1); and ND = number of decision variables in the optimization problem.
The odor intensity at each position reflects its closeness to the prey. This process is modeled in the SSO algorithm through an objective function. Assuming a maximization problem, a higher value of the objective function represents a stronger odor (more odor particles), and consequently a position of the shark closer to its prey. The SSO algorithm initiates according to this view (Abedinia et al. 2014).
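The initialization described above can be sketched in a few lines. This is an illustrative reading of Eqs. (10.1) and (10.2), not the authors' code; the toy objective function and the search bounds are assumptions.

```python
import numpy as np

def initialize_sharks(objective, lower, upper, np_size, rng):
    """Generate NP random initial positions (odor particles) inside the
    feasible box and evaluate the odor intensity (objective value) at each."""
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    # Each row is one shark position x_i^1 with ND decision variables.
    positions = lower + rng.random((np_size, lower.size)) * (upper - lower)
    intensities = np.array([objective(x) for x in positions])
    return positions, intensities

# Toy maximization problem: the "prey" (optimum) sits at the origin.
rng = np.random.default_rng(0)
pos, odor = initialize_sharks(lambda x: -np.sum(x**2), [-5, -5], [5, 5], 10, rng)
```

A higher `odor` value marks a position closer to the prey, mirroring the objective-as-odor-intensity view of the text.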
The shark at each position moves with a velocity to become closer to the prey.
Based on the position vectors, the initial velocity vector can be expressed as:
[V_1^1, V_2^1, \ldots, V_{NP}^1]. \quad (10.3)
The shark follows the odor, and the direction of its movement is determined by the intensity of the odor. The velocity of the shark increases with increasing odor concentration. From the optimization point of view, this movement is modeled mathematically by the gradient of the objective function, which indicates the direction in which the function increases at the highest rate. Equation (10.5) shows this process (Abedinia et al. 2014):

V_i^k = \eta_k \cdot R1 \cdot \nabla(OF)\big|_{x_i^k}, \quad (10.5)

where V_i^k = velocity vector of the ith shark at stage k; \eta_k = gradient step-size coefficient for stage k; and R1 = random number generator with a uniform distribution on the interval [0,1].
Due to the existence of inertia, the acceleration of the shark is limited and its velocity depends on its previous velocity. This process is modeled by a modified form of Eq. (10.6) as follows:
V_{i,j}^k = \eta_k \cdot R1 \cdot \frac{\partial(OF)}{\partial x_j}\Big|_{x_{i,j}^k} + \alpha_k \cdot R2 \cdot V_{i,j}^{k-1}, \quad i = 1, \ldots, NP; \; j = 1, \ldots, ND; \; k = 1, \ldots, k_{\max}, \quad (10.7)
where \alpha_k = rate of momentum (inertia coefficient), which takes a value in the interval [0,1] and is a constant for stage k; and R2 = random number generator with a uniform distribution on the interval [0,1], intended for the momentum term. A larger value of \alpha_k indicates higher inertia, so the current velocity depends more strongly on the previous velocity. From the mathematical point of view, the momentum term leads to smoother search paths in the solution space, while R2 increases the diversity of the search. For the velocity in the first stage (V_{i,j}^1), it is possible to neglect the initial velocity of the shark before the search process starts (V_{i,j}^0) or to allocate a very small value to it.
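A minimal sketch of the velocity update of Eq. (10.7), with the gradient of the objective estimated by central differences; the step size `h` and the toy objective are assumptions made for illustration.

```python
import numpy as np

def shark_velocity(objective, x, v_prev, eta, alpha, rng, h=1e-6):
    """Eq. (10.7): a random fraction of the objective gradient plus a
    random fraction of the previous velocity (momentum term)."""
    nd = x.size
    grad = np.empty(nd)
    # Central-difference estimate of the gradient of OF at x.
    for j in range(nd):
        e = np.zeros(nd)
        e[j] = h
        grad[j] = (objective(x + e) - objective(x - e)) / (2 * h)
    r1 = rng.random(nd)  # R1 ~ U[0, 1], one draw per dimension
    r2 = rng.random(nd)  # R2 ~ U[0, 1] for the momentum term
    return eta * r1 * grad + alpha * r2 * v_prev

rng = np.random.default_rng(1)
v = shark_velocity(lambda x: -np.sum(x**2), np.array([1.0, -2.0]),
                   np.zeros(2), eta=0.9, alpha=0.1, rng=rng)
```

For the toy objective the gradient at (1, -2) points toward the origin, so the velocity components push the shark toward the optimum.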
The velocity of the shark can be increased only up to a specified limit. Unlike most fishes, sharks do not have swim bladders to help them stay afloat, so they cannot remain static and must keep swimming, even at a low velocity. This propulsion is provided by the strong tail fin (Wu and Yao-Tsu 1971). The normal velocity of a shark is about 20 km/h, which increases up to 80 km/h when the shark attacks. The ratio of the highest to the lowest velocity of the shark is therefore limited (for example, 80/20 = 4). The velocity limiter used in each stage of the SSO algorithm can be expressed as (Abedinia et al. 2014):
" #
@ OF
k k1 k1
Vi;j min gk :R1: ak :R2:Vi;j ; bk :Vi;j ;
@xj xk 10:8
i;j
where \beta_k = velocity limiter ratio for stage k. The value of V_{i,j}^k calculated by Eq. (10.8) has the same sign as the term selected by the minimum operator in Eq. (10.8). Due to the forward movement of the shark, its new position Y_i^{k+1} is determined based on its previous position and velocity:

Y_i^{k+1} = X_i^k + V_i^k \cdot \Delta t_k, \quad i = 1, \ldots, NP, \quad (10.9)

where \Delta t_k = time interval of stage k, assumed equal to one for all stages for the purpose of simplicity. Each component V_{i,j}^k (j = 1, \ldots, ND) of the vector V_i^k is obtained by Eq. (10.8).
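The velocity limiter of Eq. (10.8) and the subsequent forward position update can be sketched as follows. The element-wise minimum keeps whichever term has the smaller magnitude, together with its sign, as the text describes; a time interval of 1 per stage is assumed.

```python
import numpy as np

def limited_velocity(v_cand, v_prev, beta):
    """Eq. (10.8): element-wise, keep whichever of the candidate
    velocity and beta * previous velocity has the smaller magnitude,
    and take that term's sign."""
    alt = beta * v_prev
    return np.where(np.abs(v_cand) <= np.abs(alt), v_cand, alt)

# First component is clipped to beta*v_prev = 8; second stays at -0.5.
v = limited_velocity(np.array([10.0, -0.5]), np.array([2.0, 2.0]), beta=4.0)
y = np.array([0.0, 0.0]) + v * 1.0  # forward move: Y = X + V * dt, dt = 1
```

Clipping against beta times the previous velocity mirrors the bounded ratio between a shark's attack speed and cruising speed.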
10 Shark Smell Optimization (SSO) Algorithm 99
In addition to the forward movement, the shark also has a rotational movement in its direction to find stronger odor particles; in fact, this improves its progress (Wu and Yao-Tsu 1971). The simulation of this movement is shown in Fig. 10.2.
As shown in Fig. 10.2, the rotational movement of the shark takes place along a closed contour which is not necessarily a circle. From the optimization point of view, the shark performs a local search in each stage in order to find better solutions. This local search is modeled in the SSO algorithm as (Abedinia et al. 2014):

Z_i^{k+1,m} = Y_i^{k+1} + R3 \cdot Y_i^{k+1}, \quad m = 1, \ldots, M, \quad (10.10)

where Z_i^{k+1,m} = mth point of the local search around Y_i^{k+1}; R3 = random number generator with a uniform distribution on the interval [-1, 1]; and M = number of points in the local search of each stage.
In Eq. (10.11), the objective function (OF) must be maximized. In other words, among Y_i^{k+1} obtained from the forward movement and Z_i^{k+1,m} (m = 1, 2, ..., M) obtained from the rotational movement, the solution with the highest objective function value is selected as the next position of the shark (x_i^{k+1}). The cycle of forward and rotational movements continues until k reaches k_max.
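A sketch of one stage of rotational movement and selection: M candidate points are sampled around the forward position Y, and the best of Y and the candidates becomes the next position. The specific sampling rule Z = Y + R3·Y with R3 ~ U[-1, 1] follows the local-search description above but is an illustrative assumption.

```python
import numpy as np

def rotational_move(objective, y, m_points, rng):
    """Local search around the forward position y: sample M candidate
    points on a closed contour around y and keep whichever of y and the
    candidates has the highest objective (odor intensity) value."""
    candidates = [y + rng.uniform(-1.0, 1.0, size=y.size) * y
                  for _ in range(m_points)]
    # The incumbent y competes with its own perturbations.
    return max([y] + candidates, key=objective)

rng = np.random.default_rng(2)
x_next = rotational_move(lambda x: -np.sum(x**2), np.array([1.0, 1.0]), 5, rng)
```

Because y itself is among the compared points, the selected position can never be worse than the forward-movement result.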
Like other meta-heuristic optimization methods, the SSO algorithm has a number of parameters that must be defined by the user, including NP, k_max, and the stage-wise values of \eta, \alpha, and \beta. Changing these parameters during the SSO evolution with an adaptive mechanism is effective in applications. For example, such a mechanism may start from larger values of \eta and \beta and a smaller value of \alpha, and then decrease \eta and \beta while increasing \alpha. Thus, in the initial stages of the evolution process the algorithm proceeds with large steps to enhance its search ability, and in the last stages (when the algorithm approaches the optimal solution) the steps become smaller to increase the resolution of the search around the optimal solution. After setting the parameters, the population and the stage counter of the SSO are initialized.
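One possible adaptive schedule of the kind described above can be written as a simple linear interpolation over the stages; the start and end values here are illustrative assumptions, not values from the source.

```python
def sso_schedule(k, k_max, eta0=0.9, beta0=4.0, alpha0=0.1):
    """Illustrative adaptive schedule: eta and beta shrink linearly over
    the stages while alpha grows, so early stages take large exploratory
    steps and late stages refine the search around the incumbent."""
    frac = (k - 1) / max(k_max - 1, 1)          # 0 at stage 1, 1 at the last stage
    eta = eta0 * (1.0 - frac) + 0.1 * frac      # step-size gain decays
    beta = beta0 * (1.0 - frac) + 1.0 * frac    # velocity limiter tightens
    alpha = alpha0 * (1.0 - frac) + 0.9 * frac  # inertia increases
    return eta, beta, alpha

eta1, beta1, alpha1 = sso_schedule(1, 10)
etaN, betaN, alphaN = sso_schedule(10, 10)
```

Any monotone schedule with the same trends would serve; linear interpolation is simply the easiest to reason about.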
The population is then evolved by the forward and rotational movement operators. Finally, the best solution in the final stage is selected as the solution of the optimization problem. The search operators of the SSO algorithm, namely the gradient-based forward movement and the local-search-based rotational movement, are not used in any other meta-heuristic algorithm (Abedinia et al. 2014).
The flowchart of the SSO algorithm is shown in Fig. 10.3, and the parameters
and variables used in the SSO algorithm are listed in Table 10.1.
Begin
  Step 1. Initialization: set the parameters (NP, k_max, \eta_k, \alpha_k, \beta_k) and randomly generate the initial positions and velocities of the sharks
  For k = 1 : k_max
    Step 2. Forward movement: update the velocity of each shark [Eq. (10.8)] and move it to its new position
    Step 3. Rotational movement: perform a local search around each new position
    Step 4. Select the position with the highest OF value as the next position of each shark
  End for k
  Select the best position of shark in the last stage which has the highest OF value
End
10.5 Conclusion
In this chapter, the shark smell optimization (SSO) algorithm was introduced as one of the new meta-heuristic optimization methods. This algorithm was developed based on sharks' ability to hunt using their sense of smell. It is a stochastic search optimization algorithm which starts with a set of random solutions and continues the search to find the optimal solution, applying a gradient-based forward movement and a local-search-based rotational movement during the optimization process. In this chapter, the SSO algorithm was introduced and its mathematical formulation was presented.
References
Abedinia, O., & Amjady, N. (2015). Short-term wind power prediction based on hybrid neural network and chaotic shark smell optimization. International Journal of Precision Engineering and Manufacturing-Green Technology, 2(3), 245–254.
Abedinia, O., Amjady, N., & Ghasemi, A. (2014). A new metaheuristic algorithm based on shark smell optimization. Complexity. doi:10.1002/cplx.21634
Costa, D. P., & Sinervo, B. (2004). Field physiology: Physiological insights from animals in nature. Annual Review of Physiology, 66, 209–238.
Ehteram, M., Karimi, H., Musavi, S. F., & El-Shafie, A. (2017). Optimizing dam and reservoirs operation based model utilizing shark algorithm approach. Knowledge-Based Systems (in press). doi:10.1016/j.knosys.2017.01.026
Ghaffari, S., Aghajani, G., Noruzi, A., & Hedayati-Mehr, H. (2016). Optimal economic load dispatch based on wind energy and risk constrains through an intelligent algorithm. Complexity, 21(S2), 494–506.
Gnanasekaran, N., Chandramohan, S., Sathish Kumar, P., & Mohamed Imran, A. (2016). Optimal placement of capacitors in radial distribution system using shark smell optimization algorithm. Ain Shams Engineering Journal, 7, 907–916.
Magnuson, J. J. (1979). Locomotion by scombrid fishes: Hydromechanics, morphology and behavior. Fish Physiology, 7, 239–313.
Sfakiotakis, M., Lane, D. M., & Davies, J. B. C. (1999). Review of fish swimming modes for aquatic locomotion. IEEE Journal of Oceanic Engineering, 24, 237–252.
Wu, T. Yao-Tsu. (1971). Hydromechanics of swimming propulsion. Part 1. Swimming of a two-dimensional flexible plate at variable forward speeds in an inviscid fluid. Journal of Fluid Mechanics, 46(2), 337–355.
Chapter 11
Ant Lion Optimizer (ALO) Algorithm
Abstract This chapter introduces the ant lion optimizer (ALO), which mimics the hunting behavior of antlions in the larval stage. Specifically, this chapter includes a literature review, details of the ALO algorithm, and a pseudo-code for its implementation.
11.1 Introduction
Mirjalili (2015) introduced the ant lion optimizer (ALO) algorithm and proved its capability by solving 19 different mathematical benchmark problems and three classical engineering problems: three-bar truss design, cantilever beam design, and gear train design. In addition, ALO was used for optimizing the shapes of two ship propellers as a challenging constrained problem with a diverse search space, which showed the ability of the ALO algorithm to solve real, complex problems. Yamany et al. (2015) used ALO for determining the weights and biases in the training process of a multilayer perceptron (MLP) so as to obtain a minimum error and an appropriate classification rate. In that research, the performance of ALO was compared with those of the genetic algorithm (GA), particle swarm optimization (PSO) algorithm, and ant colony optimization (ACO) algorithm to show its capability. Zawbaa et al. (2015) applied ALO to a feature selection optimization problem.
Antlion larvae are known as doodlebugs, which have a predatory habit. Adult antlions, which are less well known, can fly and may be mistakenly identified as dragonflies or damselflies.
The name of antlions best describes their unique hunting behavior and their favorite prey, which is ants. The larvae of some antlion species dig cone-shaped pits of different sizes and wait at the bottom of the pits for ants or other insects to slip on the loose sand and fall in, as shown in Fig. 11.1a.
When an insect falls into the trap, the antlion tries to catch it while the trapped insect tries to escape. The antlion intelligently slides the prey toward the bottom of the pit by throwing sand toward the edge of the pit. After catching the prey, the antlion pulls it under the soil and consumes it (Fig. 11.1b). After feeding is completed, the antlion flicks the leftovers of the prey out of the pit and prepares the pit for the next hunt.
It should be noted that the size of an antlion's trap depends on the antlion's level of hunger and on the phase of the moon: antlions dig larger pits when they become hungry and also when the moon is full (Goodenough et al. 2009). For larger pits, the chance of successful hunting increases. The ALO algorithm is inspired by this intelligent hunting behavior of antlions and their interaction with their favorite prey, ants; the main steps of antlion hunting are mathematically modeled in the ALO algorithm.
In the ALO algorithm, ants are the search agents and move over the decision space, and antlions are allowed to hunt them and become fitter. In each iteration, the position of each ant is updated with respect to an antlion selected by the roulette wheel and to the elite (the best antlion obtained so far). With the roulette wheel selection operator, solutions with better fitness values have a higher chance of being selected, just as an antlion with a larger trap has a higher chance of hunting more ants. Table 11.1 lists the characteristics of the ALO algorithm.
In the ALO algorithm, the first positions of antlions and ants are initialized randomly and their fitness values are calculated. Then, the elite antlion is determined. In each iteration, one antlion is selected for each ant by the roulette wheel operator, and the ant's position is updated with the aid of two random walks, one around the roulette-selected antlion and one around the elite.
108 M. Mani et al.
Fig. 11.2 Flowchart of the ALO algorithm (It = iteration counter; and IT = number of iterations)
The new positions of the ants are evaluated by calculating their fitness values and comparing them with those of the antlions. If an ant becomes fitter than its corresponding antlion, its position is taken as the new position of that antlion in the next iteration. Also, the elite is updated if the best antlion achieved in the current iteration becomes fitter than the elite. These steps are repeated until the end of the iterations. The flowchart of the ALO algorithm is illustrated in Fig. 11.2.
In the ALO algorithm, there are two populations: ants and antlions. As aforementioned, ants are the search agents in the decision space, and antlions, which hide somewhere in the decision space, hunt them and take their positions to become fitter. In an optimization problem with N decision variables (an N-dimensional optimization problem), the ants'/antlions' positions in the N-dimensional space are the decision variables, so each dimension of an ant's or antlion's position corresponds to one decision variable. The ants' positions can be expressed as:

P_{ant} =
\begin{bmatrix}
A_{1,1} & A_{1,2} & \cdots & A_{1,N} \\
A_{2,1} & A_{2,2} & \cdots & A_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
A_{M,1} & A_{M,2} & \cdots & A_{M,N}
\end{bmatrix}, \quad (11.3)

where P_ant = matrix of the ants' positions; A_{m,n} = nth decision variable of the mth ant; and M = number of ants.
P_{antlion} =
\begin{bmatrix}
Al_{1,1} & Al_{1,2} & \cdots & Al_{1,N} \\
Al_{2,1} & Al_{2,2} & \cdots & Al_{2,N} \\
\vdots & \vdots & \ddots & \vdots \\
Al_{M,1} & Al_{M,2} & \cdots & Al_{M,N}
\end{bmatrix}, \quad (11.4)

where P_antlion = matrix of the antlions' positions; and Al_{m,n} = nth decision variable of the mth antlion.
For evaluating the ants and antlions, a fitness function is utilized; their fitness values are calculated during optimization and saved in the following matrices. In this process, the antlion with the best fitness is selected as the elite.
F_{ant} =
\begin{bmatrix}
f(A_{1,1}, A_{1,2}, \ldots, A_{1,N}) \\
f(A_{2,1}, A_{2,2}, \ldots, A_{2,N}) \\
\vdots \\
f(A_{M,1}, A_{M,2}, \ldots, A_{M,N})
\end{bmatrix}, \quad (11.5)

where F_ant = matrix of the ants' fitness values; and f = fitness function.
F_{antlion} =
\begin{bmatrix}
f(Al_{1,1}, Al_{1,2}, \ldots, Al_{1,N}) \\
f(Al_{2,1}, Al_{2,2}, \ldots, Al_{2,N}) \\
\vdots \\
f(Al_{M,1}, Al_{M,2}, \ldots, Al_{M,N})
\end{bmatrix}, \quad (11.6)

where F_antlion = matrix of the antlions' fitness values.
In this step, an antlion is selected for each ant by the roulette wheel operator. It should be noted that in the ALO algorithm each ant can fall into only one antlion's trap in each iteration; the roulette-wheel-selected antlion for each ant is the one that has trapped it. With the roulette wheel operator, a solution with a better fitness value has a higher chance of being selected, just as an antlion with a larger trap can hunt more ants.
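Roulette wheel selection can be sketched as below; shifting the fitness values so that all weights are positive before normalizing is an implementation assumption (useful when fitness values may be negative).

```python
import numpy as np

def roulette_select(fitness, rng):
    """Pick one index with probability proportional to its
    (shifted-to-positive) fitness: fitter antlions -- larger traps --
    capture more ants."""
    f = np.asarray(fitness, dtype=float)
    w = f - f.min() + 1e-12  # shift so every weight is strictly positive
    return rng.choice(len(f), p=w / w.sum())

rng = np.random.default_rng(3)
picks = [roulette_select([1.0, 2.0, 7.0], rng) for _ in range(2000)]
```

Over many draws the antlion with fitness 7 is selected far more often than the others, which is exactly the "larger trap" bias the text describes.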
When an ant falls into the trap, the antlion starts shooting sand outward from the center of the pit to slide the escaping ant down. This behavior is mathematically modeled by shrinking the radius of the ant's random walk: the range of the boundary for all decision variables is decreased and updated, as expressed in Eqs. (11.7) and (11.8) (Mirjalili 2015).
\hat{c}(It) = \frac{c(It)}{R}, \quad (11.7)

\hat{d}(It) = \frac{d(It)}{R}, \quad (11.8)

where \hat{c}(It) = modified vector of the minimum of all decision variables at the Itth iteration; c(It) = vector of the minimum of all decision variables at the Itth iteration; R = a ratio given by Eq. (11.9); \hat{d}(It) = modified vector of the maximum of all decision variables at the Itth iteration; and d(It) = vector of the maximum of all decision variables at the Itth iteration (Mirjalili 2015).

R = 10^w \cdot \frac{It}{IT}, \quad (11.9)
When the iteration number in Eq. (11.10) increases, the radius of random walk
decreases, which guarantees convergence of the ALO algorithm.
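The boundary-shrinking rule of Eqs. (11.7)–(11.9) can be sketched as follows; the exponent w is treated here as a fixed constant, although in the source it changes with the iteration number.

```python
import numpy as np

def shrink_bounds(c, d, it, it_max, w=2.0):
    """Divide the current lower (c) and upper (d) bound vectors by the
    ratio R = 10^w * it / it_max, so the radius of the ants' random walk
    contracts as the iterations advance."""
    r = (10.0 ** w) * it / it_max
    return np.asarray(c, dtype=float) / r, np.asarray(d, dtype=float) / r

c1, d1 = shrink_bounds([-4.0, -4.0], [4.0, 4.0], it=1, it_max=100)    # R = 1
c2, d2 = shrink_bounds([-4.0, -4.0], [4.0, 4.0], it=50, it_max=100)   # R = 50
```

At the first iteration the bounds are untouched; halfway through, the walk is already confined to a region fifty times smaller, which is the convergence mechanism the text refers to.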
Antlion traps affect the random walks of ants. To model this behavior mathematically, the boundary of the ant's random walk is adjusted in each iteration so that the ant moves in a hyper-sphere around the selected antlion. The lower and upper bounds of the ant's random walk for each dimension in each iteration can be calculated by the following equations (Mirjalili 2015):

c_m(It) = Antlion_l(It) + c(It), \quad (11.11)

d_m(It) = Antlion_l(It) + d(It), \quad (11.12)

where c_m(It) = vector of the minimum of all decision variables for the mth ant at the Itth iteration; Antlion_l(It) = position of the selected lth antlion at the Itth iteration; and d_m(It) = vector of the maximum of all decision variables for the mth ant. Equations (11.11) and (11.12) show that the ants' random walks take place in a hyper-sphere defined by the vectors c and d around the roulette-wheel-selected antlion.
As ants move randomly in nature in search of food, a random walk is used to model their movement, which can be expressed as (Mirjalili 2015):

X(It) = [0, \text{cumsum}(2r(1) - 1), \text{cumsum}(2r(2) - 1), \ldots, \text{cumsum}(2r(IT) - 1)], \quad (11.13)

r(It) = \begin{cases} 1 & \text{if } rand > 0.5 \\ 0 & \text{otherwise,} \end{cases} \quad (11.14)

where cumsum = cumulative sum; and rand = random value generated with a uniform distribution between 0 and 1. Figure 11.3 shows three random walks in 100 iterations, from which completely different behaviors of the random walk can be observed.
To keep the random walks within the decision space, they are normalized by the
min-max normalization method (Mirjalili 2015):
Z_n(It) = \frac{[X_n(It) - a_n] \times [d_n(It) - c_n(It)]}{b_n - a_n} + c_n(It), \quad (11.15)
where Zn(It) = normalized random walk position of the nth decision variable at the
Itth iteration; Xn(It) = random walk position of the nth decision variable at the Itth
iteration before normalization; an = minimum of random walks for the nth decision
variable; bn = maximum of random walks for the nth decision variable;
cn(It) = minimum of the nth decision variable at the Itth iteration; and
dn(It) = maximum of the nth decision variable at the Itth iteration.
Fig. 11.3 Three random walk curves in one dimension started at zero
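A one-dimensional walk of this kind, min-max normalized into the interval [c, d] by Eq. (11.15), can be sketched as:

```python
import numpy as np

def normalized_walk(steps, c, d, rng):
    """Cumulative sum of +/-1 Bernoulli steps (starting at 0), rescaled
    by min-max normalization into the interval [c, d] that surrounds the
    selected antlion -- Eq. (11.15)."""
    r = rng.random(steps) > 0.5                      # Bernoulli steps
    x = np.concatenate(([0.0], np.cumsum(2 * r.astype(float) - 1)))
    a, b = x.min(), x.max()                          # walk's own extremes
    return (x - a) * (d - c) / (b - a) + c

rng = np.random.default_rng(4)
z = normalized_walk(100, c=-0.5, d=0.5, rng=rng)
```

However erratic the raw walk, the normalized version always spans exactly the shrunken boundary interval, keeping the ant inside the decision space.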
11.2.6 Elitism
Each ant walks randomly around the antlion selected by the roulette wheel and, simultaneously, around the elite; its new position is the average of the two random walks:

Ant_m(It) = \frac{R_l(It) + R_e(It)}{2}, \quad (11.16)

where Ant_m(It) = position of the mth ant at the Itth iteration; R_l(It) = random walk around the lth roulette-wheel-selected antlion at the Itth iteration; and R_e(It) = random walk around the elite antlion at the Itth iteration.
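Eq. (11.16) itself is a one-liner: the ant's next position is the element-wise average of its walk around the roulette-selected antlion and its walk around the elite.

```python
import numpy as np

def ant_position(r_l, r_e):
    """Eq. (11.16): average the walk around the selected antlion (r_l)
    and the walk around the elite antlion (r_e)."""
    return (np.asarray(r_l, dtype=float) + np.asarray(r_e, dtype=float)) / 2.0

pos = ant_position([0.2, -0.4], [0.6, 0.0])
```

Averaging pulls every ant partway toward the elite's neighborhood, which is how elitism steers the whole population.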
At the final step of antlion hunting, the ant falls into the bottom of the trap and is caught by the antlion's jaws. The antlion pulls the ant under the sand and consumes it. In the ALO algorithm, catching the prey occurs when an ant's fitness value becomes better than that of its corresponding antlion; in this situation, the antlion changes its position to the position of the hunted ant. This process can be mathematically expressed as:

Antlion_l(It) = Ant_m(It) \quad \text{if } f(Ant_m(It)) > f(Antlion_l(It)). \quad (11.17)
In evolutionary algorithms, at the end of each iteration a termination criterion is applied to decide whether the algorithm stops or continues to the next iteration. A good termination criterion should guarantee convergence of the algorithm. To meet this purpose in the ALO algorithm, the radius of the random walk decreases as the iteration number increases, and the maximum number of iterations is used as the termination criterion.
In the ALO algorithm, only the number of search agents and the number of iterations are user-defined parameters. So, one of the main advantages of the ALO algorithm is that it has very few parameters to be adjusted.
Begin
  Initialize the positions of ants and antlions randomly, calculate their fitness values, and determine the elite antlion
  While the iteration number is smaller than the maximum iteration number
    For each ant
      Select an antlion by the roulette wheel operator
      Update the lower and upper bounds by using Equations (11.7) and (11.8)
      Generate two random walks around the roulette selected antlion and the elite and normalize them by Equation (11.15)
      Update the position of the ant by Equation (11.16)
    End for
    Calculate the fitness function values for all ants and replace them in the corresponding matrix
    Replace an antlion with its corresponding ant if it becomes fitter [Equation (11.17)]
    Update the elite if an antlion's fitness value becomes better than the elite's
  End while
  Return the elite as the best solution
End
11.6 Conclusion
This chapter introduced the ant lion optimizer (ALO), which is inspired by the hunting behavior of antlions. After a brief literature review of the ALO algorithm and an explanation of the hunting behavior of antlions, the algorithmic fundamentals of the ALO algorithm were detailed and a pseudo-code was presented.
References
Ali, E. S., Elazim, S. A., & Abdelaziz, A. Y. (2016). Optimal allocation and sizing of renewable distributed generation using antlion optimization algorithm. Electrical Engineering. doi:10.1007/s00202-016-0477-z
Dubey, H. M., Pandit, M., & Panigrahi, B. K. (2016). Antlion optimization for short-term wind integrated hydrothermal power generation scheduling. International Journal of Electrical Power & Energy Systems, 83(1), 158–174.
Goodenough, J., McGuire, B., & Jakob, E. (2009). Perspectives on animal behavior (3rd ed.). New York, USA: John Wiley and Sons.
Kamboj, V. K., Bhadoria, A., & Bath, S. K. (2016). Solution of non-convex economic load dispatch problem for small-scale power systems using antlion optimizer. Neural Computing and Applications, 25(5), 1–12.
Kaur, M., & Mahajan, A. (2016). Community detection in complex networks: A novel approach based on antlion optimizer. Sixth International Conference on Soft Computing for Problem Solving, Punjab, India, December 23–24.
Kaushal, K., & Singh, S. (2017). Allocation of stocks in a portfolio using antlion algorithm: Investor's perspective. IUP Journal of Applied Economics, 6(1), 34–49.
Mirjalili, S. (2015). The ant lion optimizer. Advances in Engineering Software, 83(1), 80–98.
Mirjalili, S., Jangir, P., & Saremi, S. (2017). Multi-objective antlion optimizer: A multi-objective optimization algorithm for solving engineering problems. Applied Intelligence, 46(1), 79–95.
Petrović, M., Petronijević, J., Mitić, M., Vuković, N., Miljković, Z., & Babić, B. (2016). The antlion optimization algorithm for integrated process planning and scheduling. Applied Mechanics and Materials, 834(1), 187–192.
Rajan, A., Jeevan, K., & Malakar, T. (2017). Weighted elitism based antlion optimizer to solve optimum VAr planning problem. Applied Soft Computing, 55(1), 352–370.
Raju, M., Saikia, L. C., & Sinha, N. (2016). Automatic generation control of a multi-area system using antlion optimizer algorithm based PID plus second order derivative controller. International Journal of Electrical Power & Energy Systems, 80(1), 52–63.
Saxena, P., & Kothari, A. (2016). Antlion optimization algorithm to control side lobe level and null depths in linear antenna arrays. AEU-International Journal of Electronics and Communications, 70(9), 1339–1349.
Talatahari, S. (2016). Optimum design of skeletal structures using antlion optimizer. International Journal of Optimization in Civil Engineering, 6(1), 13–25.
Yamany, W., et al. (2015, September 20–22). A new multi-layer perceptrons trainer based on antlion optimization algorithm. Fourth International Conference on Information Science and Industrial Applications, Beijing, China.
Yao, P., & Wang, H. (2016). Dynamic adaptive antlion optimizer applied to route planning for unmanned aerial vehicle. Soft Computing. doi:10.1007/s00500-016-2138-6
Zawbaa, H. M., Emary, E., & Grosan, C. (2016). Feature selection via chaotic antlion optimization. PLoS ONE, 11(3), e0150652.
Zawbaa, H. M., Emary, E., & Parv, B. (2015, November 23–25). Feature selection based on antlion optimization algorithm. Third World Conference on Complex Systems, Marrakech, Morocco.
Chapter 12
Gradient Evolution (GE) Algorithm
12.1 Introduction
Kuo and Zulvia (2015) developed the gradient evolution (GE) algorithm, which employs a gradient-based search method as the main updating rule. The GE algorithm explores the search space of an optimization problem using a set of vectors. Kuo and Zulvia also considered a set of operators in order to enhance the ability of their model in finding the optimal solution. They further evaluated the performance of the GE algorithm by using 15 benchmark test functions in three stages. In the first stage, the effects of changing parameters on the results of the GE algorithm were investigated and the best parameter setting was determined. Then, the results of the GE algorithm were compared with those of other meta-heuristic algorithms, including particle swarm optimization (PSO), differential evolution (DE), continuous genetic algorithm (GA), and artificial bee colony (ABC). The results indicated better performance of the GE algorithm than the other meta-heuristic algorithms.
Kuo and Zulvia (2016) also proposed a K-means clustering algorithm based on the GE algorithm in order to extract the hidden information stored in data sets. The motivation for this new algorithm was the dependency of the K-means algorithm on the initial centroids of the clusters; in the proposed algorithm, the GE algorithm is utilized to find good initial centers for the K-means algorithm. The algorithm was validated on a number of benchmark datasets and the obtained results were compared with those of other meta-heuristic-based K-means algorithms, indicating the superiority of the GE-based K-means algorithm over the other meta-heuristic algorithms. Kuo and Zulvia (2016) finally proposed considering the similarities between clusters as well as the similarities within clusters in a multi-objective structure.
12.2.1 Gradient
The slope m of a function f(x) over an interval \Delta x around a point x_0, and its limit as \Delta x approaches zero (the derivative), are given by:

m = \frac{\Delta y}{\Delta x} = \frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}, \quad (12.1)

m = \lim_{\Delta x \to 0} \frac{f(x_0 + \Delta x) - f(x_0)}{\Delta x}. \quad (12.2)
The information about the function contour, or gradient, can be obtained from the correlation between the derivative and the slope of the tangent (Miller 2011). Optimization is defined as finding the solution which maximizes or minimizes an objective function. If f(x) is not strictly convex or concave, there will be several stationary points x.
In addition, a maximum or minimum point may lie in a flat part of the function, which indicates that the optimal solution is located at a stationary point. In fact, the optimal solution of an optimization problem is located at a point with a zero gradient, so the optimal solution can be obtained by determining the function contour. Accordingly, the derivative, which is based on the limit [Eq. (12.2)], is an important concept in optimization problems. Although many real-world problems have discrete variables, the limit concept applies only to continuous functions. Furthermore, for some continuous functions, the calculation of the limit is difficult because of the complexity of the functions. In such cases, the derivative is calculated using numerical methods such as Newton interpolation, Lagrange interpolation, and cubic splines (Patil and Verma 2006).
The GE algorithm was developed using the following Taylor-series-based first- and second-order derivative approximations:
120 M. Abdi-Dehkordi et al.
f'(x) = \frac{f(x + \Delta x) - f(x - \Delta x)}{2 \cdot \Delta x}, \quad (12.3)

f''(x) = \frac{f(x + \Delta x) - 2f(x) + f(x - \Delta x)}{\Delta x^2}, \quad (12.4)

where f'(x) and f''(x) = first- and second-order derivatives of f(x), respectively (Kuo and Zulvia 2015).
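The central differences of Eqs. (12.3) and (12.4) translate directly into code; for a cubic test function the estimates come out very close to the exact derivatives.

```python
def derivatives(f, x, h=1e-4):
    """Central-difference estimates of the first and second derivatives
    of f at x, following Eqs. (12.3) and (12.4)."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    return d1, d2

# For f(x) = x^3 at x = 2, the exact derivatives are f' = 12 and f'' = 12.
d1, d2 = derivatives(lambda x: x ** 3, 2.0)
```

The step size `h` trades truncation error against floating-point cancellation; `1e-4` is a common, illustrative choice, not a prescription from the source.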
Optimization methods can be divided into two main groups: direct search methods and gradient-based search methods. Both groups start the search from a point and evaluate other points in order to find an optimal solution until the stopping criterion is satisfied. In the direct search (region elimination) methods, the contour of a function is determined based on two or more points; the search direction is then limited to the part of the search space which contains the better initial point, and the search space is decreased iteratively over successive iterations in order to reach the optimal solution. Golden section search and Fibonacci search are two direct search methods (Bazaraa et al. 2013). The first- and second-order derivatives are applied in the function analysis by gradient-based methods such as Newton-Raphson (Ypma 1995). The concept of this method as applied in the GE algorithm is presented as follows.
In the Newton-Raphson method, the search process starts from an initial point and moves toward the point with a zero gradient. If the search is located at point x_t in iteration t, in the next iteration it will be at point x_{t+1}, which is located \Delta x_t from x_t. Since an extreme point is sought, the first derivative must be equal to zero. The Taylor series expansion is applied in order to estimate the first- and second-order derivatives [Eqs. (12.3) and (12.4)]. Accordingly, x_{t+1} is determined by:

x_{t+1} = x_t - \frac{f'(x_t)}{f''(x_t)}, \quad (12.5)
x_{t+1} = x_t − f′(x_t) / f″(x_t)   (12.5)

e_t = g_t / ‖g_t‖   (12.6)

g_t = Σ_{i=1}^{k} [f(t_i) − f(x_t)] (t_i − x_t)   (12.7)
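A one-dimensional Newton-Raphson search built on the central-difference estimates of Eqs. (12.3) and (12.4) can be sketched as follows; the quadratic test function, starting point, and tolerances are illustrative assumptions:

```python
def newton_raphson(f, x, dx=1e-4, tol=1e-8, max_iter=100):
    """Find a stationary point (zero gradient) of f starting from x."""
    for _ in range(max_iter):
        d1 = (f(x + dx) - f(x - dx)) / (2 * dx)          # Eq. (12.3)
        d2 = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2  # Eq. (12.4)
        if abs(d2) < 1e-12:          # guard against near-zero curvature
            break
        step = d1 / d2               # Newton-Raphson step, Eq. (12.5)
        x -= step
        if abs(step) < tol:          # converged: the step is negligible
            break
    return x

# Minimum of (x - 2)^2 + 1 is at x = 2 (illustrative test)
x_star = newton_raphson(lambda x: (x - 2) ** 2 + 1, x=10.0)  # close to 2
```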
iterations T, number of vectors N, size of initial step k, rate of jump Jr, rate of
refreshing Sr, and rate of reduction ε.
Determining the number of iterations and vectors depends on the complexity of
the problem: in complex problems, more iterations and more vectors are considered.
Jr, Sr, and ε lie in the interval [0, 1]. Jr is used when there is considerable
modification in the vector direction, while Sr and ε manage vector regeneration and
refreshing. Moreover, the value of ε controls the acceleration of vector refreshing.
It is necessary to determine the initial points for all vectors in the GE algorithm.
In this regard, the simplest method is employed: generating random numbers.
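The random initialization mentioned above can be sketched with uniform sampling between variable bounds (the function name, population size, and bounds are illustrative):

```python
import random

def init_vectors(n_vectors, dim, lb, ub):
    """Generate initial positions uniformly at random within [lb[j], ub[j]]."""
    return [[random.uniform(lb[j], ub[j]) for j in range(dim)]
            for _ in range(n_vectors)]

# e.g. five 3-dimensional vectors with per-dimension bounds
pop = init_vectors(n_vectors=5, dim=3, lb=[0, 0, 0], ub=[1, 2, 3])
```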
The updating rule, which controls the vector movement in order to reach a better
position, includes two parts: a gradient-based part and an acceleration factor. The
first part is the core of the updating rule and is derived from the gradient-based
methods, which start from an arbitrary initial point and move gradually to the next
point in a certain direction determined by the gradient.
Movement to the points with better values of the objective function in the search
direction is illustrated in Fig. 12.2. The GE algorithm also determines the part of
the search space that contains better solutions. Due to the complexity of the
problem, the gradient is not computed as the first-order derivative of the objective
function; the GE algorithm applies central differencing instead. The search process
of the GE algorithm is shown in Fig. 12.3.
As shown in Fig. 12.3a, the GE algorithm explores the search space to reach a
better area. If all the vectors in the population move simultaneously to the same
area, the search space becomes narrow. If there is a distraction direction for vector
movement, the search process of the algorithm is performed over a wider range
(Fig. 12.3b). The gradient-based updating rule applies the Newton-Raphson
equation [Eq. (12.5)]. The main updating rule is individual-based, whereas the GE
12 Gradient Evolution (GE) Algorithm 123
[Fig. 12.3 Search direction of the GE algorithm, panels (a) and (b)]
algorithm is based on a population-based search method, so there are many possible
solutions in the population. In fact, in addition to the worst and best vectors, there is
a vector X_i^t which may have a neighbor with a worse or better position.
To update X_i^t in the GE algorithm, point X_i^t − ΔX_i^t is replaced with
vector X_i^B [X_i^B ∈ P_t; f(X_i^B) ≤ f(X_i^t)] and X_i^t + ΔX_i^t is replaced
with X_i^W [X_i^W ∈ P_t; f(X_i^W) ≥ f(X_i^t)]. So, points X_i^t − ΔX_i^t and
X_i^t + ΔX_i^t are substituted by vectors X_i^B and X_i^W, respectively. The GE
algorithm also applies position X_i^t instead of the fitness of position f(X_i^t),
because applying a fitness value is time consuming.
In order to expand the search, a random number r_g ∼ N(0, 1) is added to the
updating rule. Considering r_g ensures the distribution of vector movement in the
GE algorithm. The updating rule in Eq. (12.5) can be transformed into
GradientMove by:
GradientMove = r_g · (Δx_ij^t / 2) · (x_ij^W − x_ij^B) / (x_ij^W − 2x_ij^t + x_ij^B),  ∀ j = 1, …, D   (12.9)

Δx_ij^t = [(x_ij^t − x_ij^B) + (x_ij^W − x_ij^t)] / 2,  ∀ j = 1, …, D   (12.10)
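Under this reconstruction of Eqs. (12.9) and (12.10), the per-dimension GradientMove can be sketched as follows; the variable names and the zero-denominator guard are illustrative additions:

```python
import random

def gradient_move(x_t, x_b, x_w):
    """One-dimensional GradientMove per Eqs. (12.9)-(12.10).

    x_t: current component; x_b: better neighbour; x_w: worse neighbour.
    """
    rg = random.gauss(0.0, 1.0)                   # random factor ~ N(0, 1)
    dx = ((x_t - x_b) + (x_w - x_t)) / 2.0        # step size, Eq. (12.10)
    denom = x_w - 2.0 * x_t + x_b                 # central-difference analogue
    if abs(denom) < 1e-12:                        # guard against flat regions
        return 0.0
    return rg * (dx / 2.0) * (x_w - x_b) / denom  # Eq. (12.9)
```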
The acceleration factor Acc is used to accelerate the convergence of each vector.
This process uses the best vector of a direction, which is expressed by Eq. (12.11).
In the GE algorithm, it is assumed that the best vector is the vector closest to the
optimal solution, so all other vectors will move in a better direction by considering
the position of the best vector. Similar to the gradient updating rule, the acceleration
factor in Eq. (12.11) is also multiplied by a random number r_a ∼ N(0, 1), which
ensures different step sizes for each vector (Kuo and Zulvia 2015).
Acc = r_a · (y_j − x_ij^t),  ∀ j = 1, …, D   (12.11)

where Y = {y_j | j = 1, 2, …, D} = the best vector. Finally, the vector updating is
conducted by (Kuo and Zulvia 2015):
where u_ij^t ∈ U_i^t = transition vector, which is obtained by updating X_i^t. Since
vector X_i^t has worse and better neighbors, an additional process is necessary in
order to determine the worst and best vectors in P_t, because the main updating rule
includes the neighboring vectors which have a worse or better fitness value. The
worst and best vectors do not have any neighboring vector which is worse or better
than themselves, respectively. In this regard, an additional process is required to
determine the gradient of the objective function.
If W^t and B^t are the worst and best vectors of X_i^t, respectively, the values of
x_ij^W and x_ij^B can be replaced with w_j and b_j, respectively, by
Eqs. (12.13)-(12.16) (Kuo and Zulvia 2015):
where c = size of the initial step, which is predefined. c can be a static or a dynamic
number; if dynamic, it can be decreased as the number of iterations increases. There
are two ways of handling a maximization problem: (1) switching the worse and
better neighbors, or (2) transforming it into a minimization problem.
An appropriate search method must be able to explore the search space widely and
deeply. The vector updating and vector jumping operators focus on deep and wide
search, respectively. In the GE algorithm, the vector jumping operator is applied to
avoid local optima. This operator performs only on a selected vector and modifies
the movement direction. In the GE algorithm, Jr is considered for determining
whether or not the vector must jump. If r_j ≤ Jr [r_j ∼ N(0, 1)], vector jumping of
a transition vector U_i^t is given by (Kuo and Zulvia 2015):
u_ij^t = u_ij^t + r_m · (u_ij^t − x_kj^t),  ∀ j = 1, …, D   (12.17)

[Figure: vector jumping of X_i^t relative to a randomly selected vector X_k^t by r_m (X_i^t − X_k^t)]
The GE algorithm uses an elitist strategy. In iteration t, the transition vector U_i^t
records the updated results and the jumped vector X_i^t. The next position of vector
i, X_i^{t+1}, is replaced by U_i^t when the fitness value f(U_i^t) is better than the
fitness value f(X_i^t); otherwise, the vector remains at X_i^t in the next iteration
(i.e., X_i^{t+1} = X_i^t). Using the elitist strategy, the GE algorithm ensures that
each vector always moves to a better position. If the determination of a better
position is difficult, a problem arises. This situation takes place in complex
problems and problems with many local optima. In this case, vector refreshing is
performed for such a problematic vector. The GE algorithm records the position of
vector X_i^t and the history of vector updating. The history of vector i,
s_i ∈ [0, 1], provides information about the number of iterations in which the
vector could not move to a better position. For a newly generated vector, s_i is set
to one. When vector i is stuck in the same position, s_i is reduced by:
s_i = s_i − ε · s_i   (12.18)
[Flowchart of the GE algorithm: start; vector updating to obtain U_i^t; if r_j ≤ Jr, apply vector jumping to U_i^t; calculate the fitness f(U_i^t); if f(U_i^t) is better than f(X_i^{t−1}), set X_i^t = U_i^t, otherwise keep X_i^t = X_i^{t−1} and reduce s_i by s_i = s_i − ε·s_i; repeat for all vectors until the stopping criterion is satisfied; select the best vector; end]
12.4 Pseudo-Code of GE
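The printed pseudocode did not survive extraction, so the following is a simplified one-dimensional Python sketch assembled from the chapter's description (vector updating, jumping with rate Jr, the elitist strategy, and refreshing with rate ε). The parameter values, the refresh threshold, and the bounds clamp are illustrative assumptions, not the authors' original listing:

```python
import random

def ge_minimize(f, lb, ub, n_vectors=10, n_iter=200, jr=0.3, eps=0.1):
    """Simplified one-dimensional Gradient Evolution loop (illustrative)."""
    x = [random.uniform(lb, ub) for _ in range(n_vectors)]
    s = [1.0] * n_vectors                  # update-history indicator s_i
    gbest = min(x, key=f)                  # elitist record of the best position
    for _ in range(n_iter):
        order = sorted(range(n_vectors), key=lambda i: f(x[i]))
        xb, xw = x[order[0]], x[order[-1]]   # best and worst positions in P_t
        for i in range(n_vectors):
            # gradient-based part of the updating rule [Eqs. (12.9)-(12.10)]
            dx = ((x[i] - xb) + (xw - x[i])) / 2.0
            denom = xw - 2.0 * x[i] + xb
            move = 0.0
            if abs(denom) > 1e-12:
                move = random.gauss(0, 1) * (dx / 2.0) * (xw - xb) / denom
            acc = random.gauss(0, 1) * (xb - x[i])  # acceleration [Eq. (12.11)]
            u = x[i] + move + acc                   # transition vector
            if random.random() <= jr:               # vector jumping [Eq. (12.17)]
                k = random.randrange(n_vectors)
                u = u + random.random() * (u - x[k])
            u = min(max(u, lb), ub)                 # keep u inside the bounds
            if f(u) < f(x[i]):                      # elitist strategy
                x[i], s[i] = u, 1.0
            else:
                s[i] -= eps * s[i]                  # history decay [Eq. (12.18)]
                if s[i] < 0.05:                     # refresh a stuck vector
                    x[i], s[i] = random.uniform(lb, ub), 1.0
            if f(x[i]) < f(gbest):
                gbest = x[i]
    return gbest

best = ge_minimize(lambda v: (v - 3.0) ** 2, lb=-10.0, ub=10.0)
```

The elitist record `gbest` is kept outside the population so that refreshing a stuck vector never discards the best solution found so far.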
12.5 Conclusion
References
Bazaraa, M. S., Sherali, H. D., & Shetty, C. M. (2013). Nonlinear programming: Theory and
algorithms (3rd ed.). New Jersey, USA: Wiley.
Kuo, R. J., & Zulvia, F. E. (2015). The gradient evolution algorithm: A new metaheuristic.
Information Sciences, 316, 246–265.
Kuo, R. J., & Zulvia, F. E. (2016, July 25–29). Cluster analysis using a gradient evolution-based
k-means algorithm. In IEEE congress on evolutionary computation (CEC). Beijing, China:
Peking University.
Larson, R., Hostetler, R. P., & Edwards, B. H. (2007). Essential Calculus: Early Transcendental
Functions (1st ed.). New York, USA: Houghton Mifflin Company.
Miller, H. R. (2011). Optimization: Foundations and applications (1st ed.) New York, USA:
Wiley.
Patil, P. B., & Verma, U. P. (2006). Numerical computational methods (1st ed.). Oxford, UK:
Alpha Science International.
Salomon, R. (1998). Evolutionary algorithms and gradient search: Similarities and differences.
IEEE Transactions on Evolutionary Computation, 2(2), 45–55.
Wen, J. Y., Wu, Q. H., Jiang, L., & Cheng, S. J. (2003). Pseudo-gradient based evolutionary
programming. Electronics Letters, 39(7), 631–632.
Ypma, T. J. (1995). Historical development of the Newton-Raphson method. SIAM Review, 37(4),
531–551.
Chapter 13
Moth-Flame Optimization (MFO) Algorithm
13.1 Introduction
The parameters for LSSVM were optimally determined using the MFO algorithm.
Raju et al. (2016) used MFO for simultaneous optimization of secondary controller
gains in a cascade controller proposed for automatic generation control of a
two-area hydro-thermal system under a deregulated scenario. Zawbaa et al. (2016)
proposed a feature selection algorithm based on MFO and applied it to machine
learning, using the wrapper-based feature selection mode to find the optimal feature
combination. MFO was exploited as a search method to find the optimal feature set,
maximizing classification performance. Ceylan (2016) used MFO to solve the
harmonic elimination problem and minimize the total harmonic distortion. The
simulation results showed that the MFO model solved the harmonic elimination and
total harmonic distortion minimization problems efficiently. Lal and Barisal (2016)
applied MFO to evaluate the optimal gains of the fuzzy-based proportional,
integral, and derivative (PID) controllers in a microgrid power generation system
interconnected with a single-area reheat thermal power system. Gope et al. (2016)
used MFO to obtain an optimal bidding strategy of a supplier considering
double-sided bidding under a congested power system.
Jangir et al. (2016) solved five constrained benchmark functions of engineering
problems using MFO and compared the results with other recognized optimization
algorithms; MFO provided better results in various design problems in comparison
to the other algorithms. Parmar et al. (2016) solved the optimal power flow
(OPF) problem using MFO, involving fuel cost reduction, active power loss
minimization, and reactive power loss minimization. Compared with other
techniques such as the flower pollination algorithm (FPA) and particle swarm
optimization (PSO), MFO showed better performance. Bentouati et al. (2016)
applied MFO to the OPF problem in the interconnected Algerian power system
network with different objective functions; the results were compared with those
obtained by artificial bee colony (ABC) and other metaheuristics. Allam et al.
(2016) utilized MFO for the parameter extraction process of the three-diode model
for the multi-crystalline solar cell/module, with the results being compared with
those obtained by FPA and hybrid evolutionary (DEIM) algorithms. The results
showed that MFO achieved the least root mean square error (RMSE), mean bias
error (MBE), and absolute error at the maximum power point (AEMPP), and the
best coefficient of determination. Buch et al. (2017) applied MFO to various
nonconvex, nonlinear optimal power flow objective functions with five single
objective functions; comparing MFO with other stochastic methods showed that
MFO obtained the optimum value with rapid and smooth convergence. Garg and
Gupta (2017) used MFO to optimize the performance of the open shortest path first
(OSPF) algorithm, a widely used, efficient algorithm for selecting the shortest path
between source and destination. The results for different scenarios showed a
reduction in delay and energy consumption in the optimized OSPF compared to the
traditional OSPF. Khalilpourazari and Pasandideh (2017) solved a
multi-constrained economic order quantity (EOQ) model using MFO and the
interior-point method. To compare the results of the two methods, three measures
were used: objective function value, computation time, and the number of function
evaluations. The results indicated that there was no significant difference
between the average objective function values of MFO and the interior-point
method, but MFO required significantly less computation time and fewer function
evaluations.
Different versions of the MFO algorithm have been developed by other
researchers in order to improve the performance of the algorithm in different fields
of research. Bhesdadiya et al. (2016) developed a hybrid optimization algorithm
based on PSO and MFO; PSO was used for the exploitation phase and MFO for the
exploration phase. The proposed algorithm was tested on some unconstrained
benchmark test functions along with some constrained/complex design problems,
and the obtained results demonstrated its effectiveness compared to the standard
PSO and MFO algorithms. Nanda (2016) modified the original MFO to handle
multi-objective optimization problems. The proposed MOMFO used concepts such
as the archive grid, coordinate-based distance for sorting, and non-dominance of
solutions. In tests on six benchmark mathematical functions, MOMFO achieved
better accuracy and shorter computational time than the non-dominated sorting
genetic algorithm-II (NSGA-II) and multi-objective PSO (MOPSO). Muangkote
et al. (2016) proposed an improved version of MFO for image segmentation to
enhance the optimal multilevel thresholding of satellite images. The proposed
multilevel thresholding moth-flame optimization algorithm (MTMFO) was tested
on various satellite images and provided more effective results with better accuracy
than MFO.
Soliman et al. (2016) proposed two modified versions of MFO, used them as a
prediction tool for terrorist groups, and compared them with the original MFO as
well as the ant lion optimizer (ALO), grey wolf optimization (GWO), PSO, and the
genetic algorithm (GA). The results proved that the modified versions achieved an
advance over the original MFO algorithm. Li et al. (2016b) proposed an improved
version of MFO based on the Lévy-flight strategy (LMFO) to improve the
convergence and precision of MFO. The LMFO algorithm increased the diversity
of the population against premature convergence and made the algorithm jump out
of local optima more effectively; compared with MFO and other heuristic methods,
LMFO demonstrated superior performance. Trivedi et al. (2016) applied MFO to
economic load dispatch problems, integrating MFO with Lévy flights to achieve
competitive results for both discrete and continuous control parameters.
When encountering artificial lights, moths try to maintain a similar angle to the
light source, and because of the close distance they get caught in a spiral path
(Fig. 13.1).
The MFO assigns moths to different solutions in the solution space of the
optimization problem, with each moth having its own fitness function value. Each
moth also has a flame, which stores the best solution found by that moth. In each
iteration, the moths search the solution space by flying through a spiral path toward
their flames and update their positions.
MFO starts with the positions of moths randomly initialized within the solution
space. The fitness values of the moths are calculated, which are the best individual
fitness values so far. The flame tags the best individual position for each moth. In
the next iteration, the moths' positions are updated based on a spiral movement
function toward their best individual positions tagged by flames, and the positions
of the flames are updated with the new best individual positions. The MFO
algorithm continues updating the positions of the moths and flames and generating
new positions until the termination criteria are met. Table 13.1 lists the
characteristics of the MFO. Figure 13.2 shows the flowchart of the MFO.
[Fig. 13.2 Flowchart of the MFO: start; calculate the fitness functions and tag the best positions by flames; repeat until the termination criteria are satisfied; end]
Two other components of the MFO are the flame matrix, representing the flames
in the D-dimensional space, and their corresponding fitness function vector, which
can be respectively expressed as (Mirjalili 2015):
F = | F_{1,1}  F_{1,2}  …  F_{1,d} |
    | F_{2,1}  F_{2,2}  …  F_{2,d} |
    |    ⋮        ⋮      ⋱     ⋮   |
    | F_{n,1}  F_{n,2}  …  F_{n,d} |   (13.3)

OF = [OF_1, OF_2, …, OF_n]^T   (13.4)
In MFO, moths and flames both represent solutions, with the moths searching
the solution space in each iteration to find the optimal solution and the flames
representing the best solution found by each moth. In other words, each moth
searches the space around its flame, and each time it finds a better solution the
position of the flame is updated.
MFO uses three functions to initialize the random positions of the moths (I), move
the moths in the solution space (P), and terminate the search operation (T):

MFO = (I, P, T)   (13.5)
Any random distribution can be used to initialize the moths' positions in the
solution space. The implementation of the I function can be written as (Mirjalili
2015):
in which ub and lb = arrays that respectively define the upper and lower bounds of
the variables.
The movement of moths in the solution space is based on the transverse
orientation and is modeled using a logarithmic spiral subject to the following
conditions (Mirjalili 2015):

- The spiral's initial point should start from the moth
- The spiral's final point should be the position of the flame
- The fluctuation range of the spiral should not exceed the search space

Hence, the P function for the movement is defined as:
The spiral movement of the moth around the flame guarantees the exploration
and exploitation of the solution space. In order to prevent the moths from getting
trapped in local optima, the best solutions (flames) are sorted in each iteration and
each moth flies around its corresponding flame based on the OF and OM matrices.
In other words, the first moth flies around the best obtained solution, while the last
moth flies around the worst obtained solution.
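The spiral equation itself did not survive extraction here; the form commonly cited from Mirjalili (2015) is S(M_i, F_j) = D_i · e^{bt} · cos(2πt) + F_j, with D_i = |F_j − M_i|, b the spiral-shape constant, and t a random number in [−1, 1]. A per-coordinate sketch (names are illustrative):

```python
import math
import random

def spiral_move(moth, flame, b=1.0):
    """Logarithmic-spiral update of one moth coordinate around its flame."""
    d = abs(flame - moth)            # distance between the moth and its flame
    t = random.uniform(-1.0, 1.0)    # random point along the spiral
    return d * math.exp(b * t) * math.cos(2.0 * math.pi * t) + flame
```

When the moth already sits on its flame, the distance term is zero and the update leaves it in place.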
138 M. Bahrami et al.
In order to improve the exploitation of the MFO algorithm, Eq. (13.9) is used to
decrease the number of flames, so that the moths only fly around the best solution
in the final steps of the algorithm (Mirjalili 2015):
Flame no = round(N − l · (N − 1) / T)   (13.9)

in which l = current iteration number; N = maximum number of flames; and
T = maximum number of iterations.
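A direct transcription of this flame-reduction rule (the function name is illustrative):

```python
def flame_no(l, N, T):
    """Number of flames at iteration l, per Eq. (13.9)."""
    return round(N - l * (N - 1) / T)

# e.g. with N = 10 flames and T = 100 iterations the count drops
# linearly from 10 at l = 0 toward 1 at l = T
```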
Mirjalili (2015) tested MFO on 29 benchmark functions and seven real engineering
problems, and compared the results with those obtained by other well-known
nature-inspired algorithms such as PSO, the gravitational search algorithm (GSA),
the bat algorithm (BA), FPA, states of matter search (SMS), the firefly algorithm
(FA), and GA. MFO showed promising and competitive results on the benchmark
test functions, and the results on the real problems demonstrated MFO's ability to
deal with challenging problems with constrained and unknown search spaces.
Begin
  While the termination criteria are not satisfied
    OM = FitnessFunction(M)
    If iteration = 1
      F = sort(M)
      OF = sort(OM)
    Else
      F = sort(M(t − 1), M(t))
      OF = sort(OM(t − 1), OM(t))
    End if
    For i = 1:N
      For j = 1:D
        Update the spiral parameters
        Update M(i, j) with respect to its corresponding flame
      End for j
    End for i
  End While
End
13.9 Conclusion
References
Allam, D., Yousri, D. A., & Eteiba, M. B. (2016). Parameters extraction of the three diode model
for the multi-crystalline solar cell/module using Moth-Flame Optimization Algorithm. Energy
Conversion and Management, 123, 535–548.
Bentouati, B., Chaib, L., & Chettih, S. (2016). Optimal Power Flow using the Moth Flame
Optimizer: A case study of the Algerian power system. Indonesian Journal of Electrical
Engineering and Computer Science, 1(3), 431–445.
Bhesdadiya, R. H., Trivedi, I. N., Jangir, P., Kumar, A., Jangir, N., & Totlani, R. (2016, August
12–13). A novel hybrid approach particle swarm optimizer with moth flame optimizer
algorithm. In International Conference on Computer, Communication and Computational
Sciences (ICCCCS), Advances in Intelligent Systems and Computing. Ajmer, India.
Buch, H., Trivedi, I. N., & Jangir, P. (2017). Moth flame optimization to solve optimal power flow
with non-parametric statistical evaluation validation. Cogent Engineering, 4(1).
Ceylan, O. (2016, November 3–5). Harmonic elimination of multilevel inverters by moth-flame
optimization algorithm. In International Symposium on Industrial Electronics (INDEL).
Republic of Srpska, Bosnia and Herzegovina: IEEE.
Frank, K. D. (2006). Effects of artificial night lighting on moths. In C. Rich & T. Longcore (Eds.),
Ecological consequences of artificial night lighting (pp. 305–344). Washington, DC: Island
Press.
Garg, P., & Gupta, A. (2017). Optimized open shortest path first algorithm based on moth flame
optimization. Indian Journal of Science and Technology, 9(48).
Gope, S., Dawn, S., Goswami, A. K., & Tiwari, P. K. (2016, November 22–25). Moth Flame
Optimization based optimal bidding strategy under transmission congestion in deregulated
power market. In Region 10 Conference (TENCON). Marina Bay Sands, Singapore: IEEE.
Jangir, N., Pandya, M. H., Trivedi, I. N., Bhesdadiya, R. H., Jangir, P., & Kumar, A. (2016, March
5–6). Moth-Flame Optimization algorithm for solving real challenging constrained engineering
optimization problems. In Students' Conference on Electrical, Electronics and Computer
Science (SCEECS). Bhopal, India: IEEE.
Khalilpourazari, S., & Pasandideh, S. H. R. (2017). Multi-item EOQ model with nonlinear unit
holding cost and partial backordering: Moth-flame optimization algorithm. Journal of
Industrial and Production Engineering, 34(1), 42–51.
Lal, D. K., & Barisal, A. K. (2016, December 27–28). Load frequency control of AC microgrid
interconnected thermal power system. In International Conference on Advanced Material
Technologies (ICAMT). Andhra Pradesh, India.
Li, C., Li, S., & Liu, Y. (2016a). A least squares support vector machine model optimized by
moth-flame optimization algorithm for annual power load forecasting. Applied Intelligence,
45(4), 1166–1178.
Li, Z., Zhou, Y., Zhang, S., & Song, J. (2016b). Lévy-flight moth-flame algorithm for function
optimization and engineering design problems. Mathematical Problems in Engineering.
doi:10.1155/2016/1423930.
Mirjalili, S. (2015). Moth-flame optimization algorithm: A novel nature-inspired heuristic
paradigm. Knowledge-Based Systems, 89, 228–249.
Muangkote, N., Sunat, K., & Chiewchanwattana, S. (2016, July 13–15). Multilevel thresholding
for satellite image segmentation with moth-flame based optimization. In The 13th International
Joint Conference on Computer Science and Software Engineering. Khon Kaen, Thailand.
Nanda, S. J. (2016, September 21–24). Multi-objective Moth Flame Optimization. In Advances in
Computing, Communications and Informatics (ICACCI). Jaipur, India: IEEE.
Parmar, S. A., Pandya, M. H., Bhoye, M., Trivedi, I. N., Jangir, P., & Ladumor, D. (2016, April
7–8). Optimal active and reactive power dispatch problem solution using Moth-Flame
Optimizer algorithm. In International Conference on Energy Efficient Technologies for Sustainability
(ICEETS). Nagercoil, India: IEEE.
Raju, M., Saikia, L. C., & Saha, D. (2016, November 22–25). Automatic generation control in
competitive market conditions with moth-flame optimization based cascade controller. In
Region 10 Conference (TENCON). Marina Bay Sands, Singapore: IEEE.
Soliman, G. M. A., Khorshid, M. M. H., & Abou-El-Enien, T. H. M. (2016, July). Modified
moth-flame optimization algorithms for terrorism prediction. International Journal of
Application or Innovation in Engineering and Management, 5, 47–58.
Trivedi, I. N., Kumar, A., Ranpariya, A. H., & Jangir, P. (2016, April 7–8). Economic load
dispatch problem with ramp rate limits and prohibited operating zones solved using Levy
Flight Moth-Flame optimizer. In International Conference on Energy Efficient Technologies for
Sustainability (ICEETS). Nagercoil, India.
Yamany, W., Fawzy, M., Tharwat, A., & Hassanien, A. E. (2015, December 29–30). Moth-flame
optimization for training multi-layer perceptrons. In 11th International Computer Engineering
Conference (ICENCO). Giza, Egypt: IEEE.
Zawbaa, H. M., Emary, E., Parv, B., & Sharawi, M. (2016, July 24–29). Feature selection
approach based on moth-flame optimization algorithm. In Evolutionary Computation (CEC).
IEEE.
Chapter 14
Crow Search Algorithm (CSA)
14.1 Introduction
In the last several decades, optimization has played a crucial role in many aspects of
various problems, including but not limited to engineering problems. Often, such
problems include complicated objective functions, numerous decision variables,
and a considerable number of constraints, which adds complexity to an already
complicated optimization problem. These characteristics limit the efficiency of
traditional optimization techniques. Consequently, the search for an alternative
method led to a new field of study, swarm intelligence (SI), which was introduced
by Beni and Wang in the late 1980s (Beni and Wang 1993). SI, ultimately, aims to
imitate the social intelligence of nature's group-living creatures (Bonabeau et al.
1999). Each newly proposed algorithm attempts to improve two main features:
(1) decreasing the distance between the reported solutions and the actual global
optima; and/or (2) reducing the solution searching time. Although each proposed
optimization algorithm has its unique characteristics, with both merits and
drawbacks, it has been proven that no single algorithm can outperform all its rivals
(Wolpert and Macready 1997). Subsequently, a wide range of alternative novel
optimization algorithms have been proposed, each of which has its exclusive
advantages.
One of these newly proposed algorithms is the crow search algorithm (CSA),
which was initially introduced by Askarzadeh (2016). CSA attempts to imitate the
social intelligence of a crow flock and its food-gathering process. The preliminary
results illustrated the improved efficiency of CSA over many conventional
optimization algorithms, such as the genetic algorithm (GA), particle swarm
optimization (PSO), and harmony search (HS), in both convergence time and the
accuracy of the results (Askarzadeh 2016). Ultimately, it can be concluded that
CSA is a proper alternative method for solving complex engineering optimization
problems.
Crows are a widely distributed genus of birds, which have been credited with
intelligence throughout folklore. Recent experiments investigating the cognitive
abilities of crows have begun to reveal the intelligence of these species (Emery and
Clayton 2004, 2005; Prior et al. 2008). The studies have demonstrated that some
species of crows are not only superior in intelligence to other birds but also rival
many nonhuman primates. Observations of the crows' tool use in the wild are an
example of their complex cognition (Emery and Clayton 2004). Further studies
have also revealed their self-awareness, face recognition capabilities, sophisticated
communication techniques, and food storing and retrieving skills across seasons
(Emery and Clayton 2005; Prior et al. 2008).
Interestingly, a crow individual has a tendency to tap into the food resources of
other species, including the other crow members of the flock. In fact, each crow
attempts to hide its excess food in a hideout spot and retrieve the stored food in
time of need. However, the other members of the flock, which have their own food
reservation spots as well, try to tail one another to find these hiding spots and
plunder the stored food. Nevertheless, if a crow senses that it has been pursued by
other members of the flock, in order to lose the tail and deceive the plunderer, it
maneuvers its path to a fallacious hideout spot (Clayton and Emery 2005). Plainly,
these are the core principles of the CSA, in which each crow individual searches
the decision space for hideouts with the best food resources (i.e., the global optima
from the point of view of optimization). Thus, each crow individual's motion is
induced by two main features: (1) finding the hideout spots of the other members
of the flock; and (2) protecting its own hideout spots.
In the standard CSA, the flock of crows spreads and searches throughout the
decision space for the perfect hideout spots (global optima). Since any efficient
in which r_i = a random number with uniform distribution in the range [0, 1]; and
fl(i,t) = flight length of the ith crow individual at the tth iteration.
It is worth mentioning that fl(i,t) is one of the algorithm's parameters and can
affect the searching capability of the algorithm. Smaller values of fl lead to a local
search in the vicinity of x(i,t), while larger values of fl widen the searched space.
In terms of optimization, smaller values of fl help intensify the results, while larger
values of fl diversify them. Both good intensification and good diversification are
characteristics of an efficient optimization algorithm (Gandomi et al. 2013).
There could also be the case in which the jth crow individual senses that it has
been tailed by one of the members of the flock (say the ith crow). As a result, in
order to protect its food supply from the plunderer, the jth crow would deceitfully
fly over a non-hideout spot. To imitate such an action in the CSA, a random place in
the d-dimensional decision space would be assumed for the ith crow.
In summary, the tailing motion of crow individuals for the aforementioned two
cases can be expressed as (Askarzadeh 2016):
x(i,t+1) = x(i,t) + r_i × fl(i,t) × (m(j,t) − x(i,t)),   if r_j ≥ AP(j,t)
x(i,t+1) = a random position,                            otherwise        (14.2)
146 B. Zolghadr-Asli et al.
in which r_j = a random number with uniform distribution in the range [0, 1]; and
AP(j,t) = the awareness probability of the jth crow at the tth iteration.
As mentioned previously, an efficient metaheuristic algorithm should provide a
good balance between diversification and intensification (Yang 2010). In the CSA,
intensification and diversification are mainly controlled by two parameters: the
flight length (fl) and the awareness probability (AP). By decreasing the awareness
probability, the chance of detecting the hideout spots by the members of the crow
flock increases. As a result, CSA tends to focus the search on the vicinity of the
hideout spots; thus, smaller values of AP amplify the intensification aspect of the
algorithm. On the other hand, by increasing the awareness probability, the flock of
crows is more likely to search the decision space in a random manner, for such an
action decreases the chance of discovering the real hideout spots by the plunderers.
As a result, larger values of AP amplify the diversification aspect of the algorithm.
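The position update of Eq. (14.2), with fl and AP as the two control parameters discussed above, can be sketched per dimension as follows; the bounds clamp and variable names are illustrative additions:

```python
import random

def crow_update(x_i, m_j, fl, ap, lb, ub):
    """One CSA position update: crow i tails crow j's memorized hideout m_j."""
    if random.random() >= ap:
        # crow j is unaware: crow i moves toward j's hideout, Eq. (14.2)
        r = random.random()
        new_x = x_i + r * fl * (m_j - x_i)
    else:
        # crow j is aware: crow i is deceived to a random position
        new_x = random.uniform(lb, ub)
    return min(max(new_x, lb), ub)   # keep the crow inside the decision space
```

With a small ap the first branch dominates (intensification around hideouts); with a large ap the random branch dominates (diversification), matching the discussion above.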
in which f[·] = objective function. These steps are repeated until the termination
criterion is satisfied. At that point, the best position memorized by the members of
the crow flock is reported as the optimum solution. Figure 14.1 illustrates the
flowchart of the standard CSA. Additionally, Table 14.1 summarizes the
characteristics of the CSA.
14.5 Conclusion
This chapter described the crow search algorithm (CSA), a relatively new
metaheuristic optimization algorithm based on the intelligent behavior of crows.
CSA is a population-based optimization algorithm with mainly two adjustable
parameters (flight length and awareness probability). Such characteristics make
CSA a viable option for complex engineering optimization problems. In the final
section, a pseudocode of the standard CSA was also presented.
References
15.1 Introduction
In the past decades, the natural swarming behavior of species has been the source of
inspiration for a wide range of metaheuristic optimization algorithms. In fact, this
behavior is the fundamental basis of swarm intelligence (SI), which was first
proposed by Beni and Wang in the late 1980s (Beni and Wang 1993). SI ultimately
aims to simulate the collective and social intelligence of nature's group-living
creatures (Bonabeau et al. 1999). Although both SI and traditional evolutionary
algorithms (EAs), such as the genetic algorithm (GA), have undeniable
The DA was initially proposed by Mirjalili (2016), and preliminary studies
have demonstrated its potential to outperform existing algorithms in solving both
benchmark test problems and complicated engineering problems of computational
fluid dynamics (CFD). The DA has also been modified to better handle binary
[binary dragonfly algorithm (BDA)] and multi-objective [multi-objective dragonfly
algorithm (MODA)] optimization problems (Mirjalili 2016). As a flexible
and efficient algorithm, the DA is a promising alternative for solving
complex engineering optimization problems. The following sections focus on
the basic principles of the standard DA.
$$ S_{i,t} = -\sum_{j=1}^{N} \left( X_{i,t} - X_{j,t} \right) \qquad (15.1) $$
in which X(i,t) = position of the ith dragonfly individual in the tth iteration; X(j,
t) = position of the jth neighboring dragonfly individual in the tth iteration;
N = number of neighboring dragonfly individuals; and S(i,t) = separation motion for
the ith dragonfly individual in the tth iteration.
The alignment motion is calculated by Mirjalili (2016):
$$ A_{i,t} = \frac{\sum_{j=1}^{N} V_{j,t}}{N} \qquad (15.2) $$
in which V(j,t) = velocity of the jth neighboring dragonfly individual in the tth
iteration; and A(i,t) = alignment motion for the ith dragonfly individual in the tth
iteration.
The cohesion motion can be measured by Mirjalili (2016):
$$ C_{i,t} = \frac{\sum_{j=1}^{N} X_{j,t}}{N} - X_{i,t} \qquad (15.3) $$
in which C(i,t) = cohesion motion for the ith dragonfly individual in the tth iteration.
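The three swarming motions of Eqs. (15.1)–(15.3) can be sketched as short NumPy functions. This is an illustrative translation, assuming the positions and velocities of the N neighbors are given as arrays of shape (N, d):

```python
import numpy as np

def separation(x_i, neighbors):
    # Eq. (15.1): S_i = -sum over neighbors j of (x_i - x_j)
    return -np.sum(x_i - neighbors, axis=0)

def alignment(neighbor_velocities):
    # Eq. (15.2): A_i = mean of the neighbors' velocities
    return np.mean(neighbor_velocities, axis=0)

def cohesion(x_i, neighbors):
    # Eq. (15.3): C_i = center of mass of the neighbors minus x_i
    return np.mean(neighbors, axis=0) - x_i
```

For instance, an individual at the origin with neighbors at (1, 0) and (0, 1) gets a separation motion of (1, 1), pushing it away from both, and a cohesion motion of (0.5, 0.5), pulling it toward their center of mass.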
The food attraction motion is calculated by Mirjalili (2016):

$$ F_{i,t} = X_{food,t} - X_{i,t} \qquad (15.4) $$

in which X(food,t) = position of the food source in the tth iteration; and F(i,t) = food
attraction motion for the ith dragonfly individual in the tth iteration. The food
source is the position of the dragonfly individual with the best objective function
value observed so far.
The predator distraction motion is quantified by Mirjalili (2016):

$$ E_{i,t} = X_{enemy,t} + X_{i,t} \qquad (15.5) $$

in which X(enemy,t) = position of the predator in the tth iteration; and E(i,t) = predator
distraction motion for the ith dragonfly individual in the tth iteration. The predator
is the position of the dragonfly individual with the worst objective function value
observed so far.
The combination of the aforementioned motions predicts the corrective movement
pattern of the dragonfly individuals in each iteration. The positions of the dragonfly
individuals are updated in each iteration using the current position of the dragonfly
individual [X(i,t)] and a step vector [ΔX(i,t)]. In fact, the step vector is
analogous to the velocity vector in the particle swarm optimization
(PSO) algorithm, and the procedure for updating the positions of dragonfly indi-
viduals in the DA is based on the framework of the PSO algorithm. The step vector,
which indicates the motion orientation of each dragonfly individual, is defined
as (Mirjalili 2016):

$$ \Delta X_{i,t+1} = \left( s S_{i,t} + a A_{i,t} + c C_{i,t} + f F_{i,t} + e E_{i,t} \right) + w \Delta X_{i,t} \qquad (15.6) $$

in which s, a, c, f, and e = weights of the separation, alignment, cohesion, food
attraction, and predator distraction motions, respectively; and w = inertia weight.
The position of each dragonfly individual is then updated by (Mirjalili 2016):

$$ X_{i,t+1} = X_{i,t} + \Delta X_{i,t+1} \qquad (15.7) $$
However, in order to increase the odds of exploring the entire decision space,
a random motion needs to be introduced into the searching mechanism. As a result,
to improve the randomness, stochastic behavior, and exploration of the dragonfly
individuals, they are required to fly around the search space using a random walk
(Lévy flight) when no neighboring solutions are detected in the vicinity. In this
case, the positions of dragonflies are updated by Mirjalili (2016):

$$ X_{i,t+1} = X_{i,t} + \text{Lévy}(d) \times X_{i,t} \qquad (15.8) $$

in which d = number of decision variables; and Lévy(d) = Lévy flight function that
is given by Yang (2010):
$$ \text{Lévy}(d) = 0.01 \times \frac{r_1 \times \sigma}{\left| r_2 \right|^{1/\beta}} \qquad (15.9) $$

$$ \sigma = \left( \frac{\Gamma(1+\beta) \times \sin\left(\frac{\pi\beta}{2}\right)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\frac{\beta-1}{2}}} \right)^{1/\beta} \qquad (15.10) $$

in which r1 and r2 = random numbers uniformly distributed in [0, 1]; β = a
constant; and Γ(·) = gamma function.
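Equations (15.9) and (15.10) can be sketched as a short Python function. β = 1.5 is our assumed default (a common choice for Lévy flights), and r1 and r2 are drawn independently per dimension:

```python
import math
import numpy as np

def levy_flight(d, beta=1.5, rng=None):
    """Lévy flight step of Eqs. (15.9)-(15.10) for d decision variables."""
    rng = np.random.default_rng() if rng is None else rng
    # Eq. (15.10): scale factor sigma, a deterministic function of beta
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    # Eq. (15.9): r1, r2 ~ U(0, 1), drawn per dimension
    r1 = rng.random(d)
    r2 = rng.random(d)
    return 0.01 * r1 * sigma / np.abs(r2) ** (1 / beta)
```

The heavy-tailed |r2|^(1/β) denominator occasionally produces large jumps, which is exactly what lets an isolated dragonfly escape its current region of the search space.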
Like most SI-based optimization algorithms, the DA starts the optimization process
by creating a set of random solutions for a given optimization problem. Naturally,
the number of initial dragonfly individuals (M) influences the performance of the
DA: larger populations increase the chance of finding the global optimum, while
also increasing the calculation time of each iteration and, in turn, of the entire
optimization run. After the positions of the dragonflies are initialized within the
lower and upper boundaries of each decision variable, the position of each dragonfly
individual is updated in each iteration by calculating its step vector from the
motions induced by separation, alignment, cohesion, food attraction, and predator
distraction. The position-updating process continues iteratively until the
termination criterion is satisfied. Table 15.1 lists the characteristics of the DA.
Additionally, the flowchart of the DA is shown in Fig. 15.2.
Begin
  Define the population size (M) and the termination criterion
  Initialize the positions and step vectors of the dragonfly individuals randomly
  While (termination criterion is not satisfied)
    Update the food source (best solution found so far) and the predator (worst solution found so far)
    For i=1:M
      Calculate
        Separation motion
        Alignment motion
        Cohesion motion
        Food attraction motion
        Predator distraction motion
      If (at least one neighboring individual is detected)
        Update the step vector and the position vector
      Else
        Update the position vector using the Lévy flight function
      End if
    End for i
    Sort the population/dragonflies from best to worst and find the current best
  End while
  Post-process and visualize the results
End
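The loop above can be sketched as a compact Python implementation. For simplicity, every individual here treats the whole swarm as its neighborhood, so the Lévy flight branch never triggers; the motion weights (s, a, c, fw, e, w) are illustrative assumptions of ours, not values prescribed by the text:

```python
import numpy as np

def dragonfly(f, bounds, m=30, n_iter=100, seed=0,
              s=0.1, a=0.1, c=0.7, fw=1.0, e=1.0, w=0.9):
    """Minimize f over box `bounds` with a simplified DA iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    d = lo.size
    x = rng.uniform(lo, hi, size=(m, d))   # positions of the M individuals
    dx = np.zeros((m, d))                  # step vectors

    for _ in range(n_iter):
        fit = np.array([f(p) for p in x])
        food = x[np.argmin(fit)].copy()    # best individual -> food source
        enemy = x[np.argmax(fit)].copy()   # worst individual -> predator
        for i in range(m):
            others = np.delete(x, i, axis=0)
            S = -np.sum(x[i] - others, axis=0)             # separation
            A = np.mean(np.delete(dx, i, axis=0), axis=0)  # alignment
            C = np.mean(others, axis=0) - x[i]             # cohesion
            F = food - x[i]                                # food attraction
            E = enemy + x[i]                               # predator distraction
            # step vector: weighted combination of the five motions plus inertia
            dx[i] = w * dx[i] + s * S + a * A + c * C + fw * F + e * E
            dx[i] = np.clip(dx[i], -(hi - lo), hi - lo)    # limit the step length
            x[i] = np.clip(x[i] + dx[i], lo, hi)

    fit = np.array([f(p) for p in x])
    best = np.argmin(fit)
    return x[best], float(fit[best])
```

A full implementation would additionally shrink the neighborhood radius and adapt the weights over the iterations, and would fall back to the Lévy flight of Eq. (15.9) for individuals with no neighbors.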
15.5 Conclusion
This chapter described the dragonfly algorithm (DA), a recently introduced
metaheuristic optimization algorithm. After a brief review of previous
applications of the DA, the standard DA and its mechanism were described.
In the final section, a pseudocode of the standard DA was also presented.
References
Beni, G., & Wang, J. (1993). Swarm intelligence in cellular robotic systems. In P. Dario, G.
Sandini, & P. Aebischer (Eds.), Robots and biological systems: Towards a new bionics?
Berlin, Heidelberg, New York, NY: Springer.
Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm intelligence: From natural to artificial
systems. New York, NY: Oxford University Press.
Gandomi, A. H., & Alavi, A. H. (2012). Krill herd: A new bio-inspired optimization algorithm.
Communications in Nonlinear Science and Numerical Simulation, 17(12), 4831–4845.
Gandomi, A. H., Yang, X. S., & Alavi, A. H. (2013). Cuckoo search algorithm: A metaheuristic
approach to solve structural optimization problems. Engineering with Computers, 29(1),
17–35.
Mirjalili, S. (2016). Dragonfly algorithm: A new meta-heuristic optimization technique for solving
single-objective, discrete, and multi-objective problems. Neural Computing and Applications,
27(4), 1053–1073.
Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In
Proceedings of the 14th annual conference on computer graphics and interactive techniques,
New York, NY, July 27–31.
Russell, R. W., May, M. L., Soltesz, K. L., & Fitzpatrick, J. W. (1998). Massive swarm migrations
of dragonflies (Odonata) in eastern North America. The American Midland Naturalist, 140(2),
325–342.
Thorp, J. H., & Rogers, D. C. (Eds.). (2014). Thorp and Covich's freshwater invertebrates:
Ecology and general biology (Vol. 1). Amsterdam, The Netherlands: Elsevier.
Wikelski, M., Moskowitz, D., Adelman, J. S., Cochran, J., Wilcove, D. S., & May, M. L. (2006).
Simple rules guide dragonfly migration. Biology Letters, 2(3), 325–329.
Yang, X. S. (2010). Nature-inspired metaheuristic algorithms. Frome, UK: Luniver Press.