Intelligent Systems Reference Library 160
Erik Cuevas
Fernando Fausto
Adrián González
New Advancements
in Swarm Algorithms:
Operators and
Applications
Intelligent Systems Reference Library
Volume 160
Series Editors
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
Erik Cuevas
CUCEI, Universidad de Guadalajara
Guadalajara, Mexico

Fernando Fausto
CUCEI, Universidad de Guadalajara
Guadalajara, Mexico

Adrián González
CUCEI, Universidad de Guadalajara
Guadalajara, Mexico
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The most common term for methods that employ stochastic schemes to produce search strategies is metaheuristics. In general, no strict classification of these methods exists. However, several kinds of algorithms have been distinguished according to criteria such as the source of inspiration, the cooperation among agents, or the type of operators.
Among metaheuristic methods, there is a special set of approaches which are designed in terms of the interaction among the search agents of a group. Members of the group cooperate to solve a global objective by using locally accessible knowledge that is propagated through the set of members. With this mechanism, complex problems can be solved more efficiently than with the strategy of a single individual. In general terms, this group is referred to as a swarm, where social agents interact with each other directly or indirectly by using local information from the environment. This cooperation among agents produces an effective distributed strategy to solve problems. Swarm intelligence (SI) represents a problem-solving methodology that results from the cooperation among a set of agents with similar characteristics. During this cooperation, the local behaviors of simple elements produce complex collective patterns.
The study of biological entities such as animals and insects which manifest a
social behavior has produced several computational models of swarm intelligence.
Some examples include ants, bees, locust swarms, spiders and bird flocks. In the
swarm, each agent maintains a simple strategy. However, due to its social behavior,
the final collective strategy produced by all agents is usually very complex. The
complex operation of a swarm is a consequence of the cooperative behavior among
the agents generated during their interaction.
The complex operation of the swarm cannot be reduced to the aggregation of the behaviors of each agent in the group. The association of all the simple agent behaviors is so intricate that it is usually not easy to predict or deduce the global behavior of the whole swarm. This concept is known as emergence: it refers to the process of producing complex behavioral patterns from the interaction of simple and unsophisticated strategies. Remarkably, these behavioral patterns appear without the existence of a coordinated control system but emerge from the local interactions among the agents.
this methodology as the best way to assist researchers, lecturers, engineers, and
practitioners in the solution of their own optimization problems.
This book has been structured so that each chapter can be read independently
from the others. Chapter 1 describes the main characteristics and properties of
metaheuristic and swarm methods. This chapter analyses the most important con-
cepts of metaheuristic and swarm schemes.
Chapter 2 discusses the performance and main applications of each metaheuristic
and swarm method in the literature. The idea is to establish the strengths and weaknesses of each traditional scheme from a practical perspective.
The first part of the book, which comprises Chaps. 3, 4, 5, and 6, presents recent swarm algorithms, their operators, and their characteristics. In Chap. 3, an interesting
swarm optimization algorithm called the Selfish Herd Optimizer (SHO) is presented
for solving global optimization problems. SHO is based on the simulation of the
widely observed selfish herd behavior manifested by individuals within a herd of
animals subjected to some form of predation risk. In SHO, individuals emulate the
predatory interactions between groups of prey and predators by two types of search
agents: the members of a selfish herd (the prey) and a pack of hungry predators.
Depending on their classification as either a prey or a predator, each individual is
conducted by a set of unique evolutionary operators inspired by such prey–predator
relationship. These unique traits allow SHO to improve the balance between
exploration and exploitation without altering the population size. The experimental
results show the remarkable performance of our proposed approach against those
of the other compared methods, and as such SHO is proven to be an excellent
alternative to solve global optimization problems.
Chapter 4 considers a recent swarm algorithm called the Social Spider
Optimization (SSO) for solving optimization tasks. The SSO algorithm is based on
the simulation of cooperative behavior of social spiders. In the proposed algorithm,
individuals emulate a group of spiders which interact with each other based on the
biological laws of the cooperative colony. The algorithm considers two different
search agents (spiders): males and females. Depending on gender, each individual is
conducted by a set of different evolutionary operators which mimic different
cooperative behaviors that are typically found in the colony. In order to illustrate the
proficiency and robustness of the proposed approach, it is compared to other
well-known evolutionary methods. The comparison examines several standard
benchmark functions that are commonly considered within the literature of evo-
lutionary algorithms. The outcome shows a high performance of the proposed
method for searching a global optimum with several benchmark functions.
In Chap. 5, a swarm algorithm called Locust Search (LS) is presented for solving
optimization tasks. The LS algorithm is based on the simulation of the behavior
presented in swarms of locusts. In the proposed algorithm, individuals emulate a
group of locusts which interact with each other based on the biological laws of the
cooperative swarm. The algorithm considers two different behaviors: solitary and
social. Depending on the behavior, each individual is conducted by a set of evo-
lutionary operators which mimic the different cooperative behaviors that are typi-
cally found in the swarm. In order to illustrate the proficiency and robustness of the
wide-ranging and numerous, so much that the development of methods for solving
such problems has remained a hot topic for many years.
Traditionally, optimization techniques can be roughly classified as either deterministic or stochastic [3]. Deterministic optimization approaches, whose design heavily relies on a problem's mathematical formulation and its properties, are known to have some remarkable advantages, such as fast convergence and implementation simplicity [4]. On the other hand, stochastic approaches, which resort to the integration of randomness into the optimization process, stand as promising alternatives to deterministic methods: they are far less dependent on problem formulation, and their ability to thoroughly explore a problem's design space allows them to overcome local optima more efficiently [5]. While both deterministic and stochastic methods have been successfully applied to solve a wide variety of optimization problems, these classical approaches are subject to some significant limitations. First, deterministic methods are often conditioned by problem properties (such as differentiability in the case of gradient-based optimization approaches) [6]. Furthermore, due to their nature, deterministic methods are highly susceptible to getting trapped in local optima, which is undesirable for most (if not all) applications. As for stochastic techniques, while these are far easier to adapt to black-box formulations or ill-behaved optimization problems, they tend to have a notably slower convergence speed than their deterministic counterparts, which poses an important limitation for applications where time is critical.
The many shortcomings of classical methods, along with the inherent challenges of real-life optimization problems, eventually led researchers to the development of heuristics as an alternative for tackling such complex problems [1]. Generally speaking, a heuristic is a technique tailored for solving a specific problem, often one considered too difficult to handle with classical techniques. In this sense, heuristics trade essential qualities such as optimality, accuracy, precision, or completeness in order to either solve a problem in reasonably less time or find an approximate solution in situations where traditional methods fail to deliver an exact one. However, while heuristic methods have been demonstrated to handle otherwise hard-to-solve problems excellently, they are still subject to some issues. Like most traditional approaches, heuristics are usually developed by considering at least some specifications of the target problem, and as such, it is hard to apply them to different problems without changing some or most of their original framework [7].
Recently, the idea of developing methodologies that could potentially solve a
wide variety of problems in a generic fashion has caught the attention of many
researchers, leading to the development of a new breed of “intelligent” optimization
techniques formally known as metaheuristics [8]. A metaheuristic is a particular kind
of heuristic-based methodology, devised with the idea of being able to solve many different problems without the need to change the algorithm's basic framework. For this purpose, metaheuristic techniques employ a series of generic procedures and abstractions aimed at iteratively improving a set of candidate solutions. For this reason, metaheuristics are often praised for their ability to find adequate solutions for most problems independently of their structure and properties.
1.2 The Rise of Nature-Inspired Metaheuristics
The word “nature” refers to many phenomena observed in the physical world. It com-
prises virtually everything perceptible to our senses and even some things that are
not as easy to perceive. Nature is the perfect example of adaptive problem solving;
it has shown countless times how it can solve many different problems by applying
an optimal strategy, suited to each particular natural phenomenon. Many researchers
around the world have become captivated by how nature can adapt to such an exten-
sive array of situations, and for many years they have tried to emulate these intriguing
problem-solving schemes to develop tools with real-world applications. In fact, for
the last two decades, nature has served as the most important source of inspiration
in the development of metaheuristics. As a result, a whole new class of optimization techniques was born in the form of the so-called nature-inspired optimization algorithms. These methods (often referred to as bio-inspired algorithms) are a particular kind of metaheuristic, developed with a single idea in mind: mimicking a biological or physical phenomenon to solve optimization problems. Depending on their source of inspiration, nature-inspired metaheuristics can be classified into four main categories: evolution-based, swarm-based, physics-based, and human-based methods [9, 10]. Evolution-based methods are developed by drawing inspiration from the laws of natural evolution. Of these methods, the most
popular is without a doubt the Genetic Algorithms approach, which simulates Dar-
winian evolution [11]. Other popular methods grouped within this category include
Evolution Strategy [12], Differential Evolution [13] and Genetic Programming [14].
On the other hand, swarm-based techniques are devised to simulate the social and
collective behavior manifested by groups of animals (such as birds, insects, fishes,
and others). The Particle Swarm Optimization [15] algorithm, which is inspired by the social behavior of bird flocking, stands as the most representative and successful
example within this category, although other relevant methods include Ant Colony
Optimization [16], Artificial Bee Colony [17], Firefly Algorithm [18], Social Spi-
der Optimization [19], among others. Also, there are the physics-based algorithms,
which are developed with the idea of emulating the laws of physics observed within
our universe. Some of the most popular methods grouped within this category are
Simulated Annealing [20], Gravitational Search Algorithm [21], Electromagnetism-
like Mechanism [22], States of Matter Search [23], to name a few. Finally, we can
mention human-based algorithms. These nature-inspired methods are unique in that they draw inspiration from several phenomena commonly associated with human behavior, lifestyle, or perception. Some of the most well-known
methods found in the literature include Harmony Search [24], Firework Algorithm
[25], Imperialist Competitive Algorithm [26], and many more.
Most nature-inspired methods are modeled as population-based algorithms, in which a group of randomly generated search agents (often referred to as individuals) explore different candidate solutions by applying a particular set of rules derived from some specific natural phenomenon. This kind of framework offers important advantages in both the interaction among individuals, which promotes a wider knowledge of different solutions, and the diversity of the population, which is an important aspect in ensuring that the algorithm can efficiently explore the design space while also being able to overcome local optima [8]. Due to this and many other
distinctive qualities, nature-inspired methods have become a popular choice among
researchers. As a result, literature related to nature-inspired optimization algorithms
and its applications for solving otherwise challenging optimization problems has
become extremely vast, with hundreds of new papers being published every year.
In this chapter, we analyze some of the most popular nature-inspired optimization methods currently reported in the literature, while also discussing their impact on the field. The rest of this chapter is organized as follows: in Sect. 1.2,
we analyze the general framework applied by most nature-inspired metaheuristics
in terms of design. In Sect. 1.3, we present nature-inspired methods according to
their classification while also reviewing some of the most popular algorithms for
each case. Finally, in Sect. 1.4, we present a brief study concerning the growth in the
number of publications related to nature-inspired methods.
where the elements xi,n represent the decision variables (parameters) related to a
given optimization problem, while d denotes the dimensionality (number of decision
variables) of the target solution space.
From an optimization point of view, each set of parameters xi ∈ X (also known as an individual) is considered a candidate solution for the specified optimization task; as such, each of these solutions is also assigned a corresponding quality value (or fitness) related to the objective function f(·) that describes the optimization task, such that:

f_i = f(x_i)   (1.2)
where x′_i denotes the candidate solution generated by adding a specified update vector Δx_i to x_i. It is worth noting that the value(s) adopted by the update vector Δx_i depend on the specific operators employed by each individual algorithm.
Finally, most nature-inspired algorithms include some kind of selection process,
in which the newly generated solutions are compared against those in the current
population Xk (with k denoting the current iteration) in terms of solution quality,
typically with the purpose of choosing the best individual(s) among them. As a result of this process, a new set of solutions X^{k+1} = {x_1^{k+1}, x_2^{k+1}, ..., x_N^{k+1}}, corresponding to the following iteration (or generation) 'k + 1', is generated.
This whole process is repeated iteratively until a particular stop criterion is met (e.g., a maximum number of iterations is reached). Once this happens, the best
solution found by the algorithm is reported as the best approximation for the global
optimum [2].
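As an illustration, the generic initialize–perturb–select loop described above can be sketched in a few lines of Python. This is not an algorithm from the book: the function, its parameters, and the plain Gaussian update vector are all illustrative assumptions standing in for the operator-specific update Δx_i.

```python
import random

def metaheuristic(objective, d, n_agents=20, iterations=100, bounds=(-5.0, 5.0)):
    """Skeleton of a population-based nature-inspired search: initialize a
    random population, perturb each agent, and greedily keep improvements."""
    lo, hi = bounds
    # Random initial population X^0 of n_agents d-dimensional solutions
    X = [[random.uniform(lo, hi) for _ in range(d)] for _ in range(n_agents)]
    fitness = [objective(x) for x in X]
    for _ in range(iterations):
        for i in range(n_agents):
            # Candidate x' = x_i + delta_i; here delta_i is a plain Gaussian
            # step, whereas real algorithms derive it from their own operators.
            candidate = [xn + random.gauss(0.0, 0.1) for xn in X[i]]
            fc = objective(candidate)
            if fc < fitness[i]:  # selection step (minimization assumed)
                X[i], fitness[i] = candidate, fc
    best = min(range(n_agents), key=fitness.__getitem__)
    return X[best], fitness[best]
```

Any of the concrete algorithms discussed below can be seen as this loop with a specialized rule for producing the candidate.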
Furthermore, for the crossover operation, DE generates a trial solution vector u_i^k = [u_{i,1}^k, u_{i,2}^k, ..., u_{i,d}^k] corresponding to each population member 'i'. The components u_{i,n}^k of such a trial vector are given by combining both the candidate solution x_i^k and its respective mutant solution m_i^k as follows:

u_{i,n}^k = { m_{i,n}^k   if rand ≤ CR or n = n*
            { x_{i,n}^k   otherwise                 for n = 1, 2, ..., d   (1.5)
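A minimal sketch of the binomial crossover of Eq. (1.5) is shown below, together with the classic DE/rand/1 mutant construction. The mutation rule itself is not reproduced in this excerpt, so its form here is an assumption based on standard DE; all function names are our own.

```python
import random

def de_mutant(population, i, F=0.8):
    """DE/rand/1 mutant (assumed standard form): m_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 distinct indices different from i."""
    idx = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = random.sample(idx, 3)
    d = len(population[i])
    return [population[r1][n] + F * (population[r2][n] - population[r3][n])
            for n in range(d)]

def de_trial_vector(x, m, CR=0.9):
    """Binomial crossover of Eq. (1.5): take the mutant component when
    rand <= CR, or at the forced index n* (which guarantees that at least
    one component comes from the mutant)."""
    d = len(x)
    n_star = random.randrange(d)
    return [m[n] if (random.random() <= CR or n == n_star) else x[n]
            for n in range(d)]
```

In full DE, the trial vector u_i^k would then compete against x_i^k via greedy selection.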
Evolution Strategies (ES) are a series of optimization techniques which draw inspiration from natural evolution [12]. The first ES approach was introduced by Ingo Rechenberg in the early 1960s and further developed during the 1970s. The most straightforward ES approach is the so-called (1 + 1)-ES (or two-membered ES). This approach considers the existence of only a single parent x = [x1, x2, ..., xd], which is assumed to be able to produce a new candidate solution (offspring) x′ = [x′1, x′2, ..., x′d] by means of mutation as follows:

x′ = x + N(0, σ)   (1.7)

where N(0, σ) denotes a d-dimensional random vector whose values are drawn from a Gaussian distribution with mean 0 and fixed standard deviation σ (although later approaches consider a dynamic value based on the number of successful mutations) [12].
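The (1+1)-ES loop of Eq. (1.7) is simple enough to sketch directly. The code below assumes minimization and a fixed σ, as in the basic scheme described above; the function name and default values are illustrative.

```python
import random

def one_plus_one_es(objective, x, sigma=0.5, iterations=200):
    """(1+1)-ES sketch: mutate the single parent with Gaussian noise
    (Eq. 1.7, x' = x + N(0, sigma)) and keep the offspring only if it
    improves the objective (minimization assumed)."""
    fx = objective(x)
    for _ in range(iterations):
        offspring = [xi + random.gauss(0.0, sigma) for xi in x]
        fo = objective(offspring)
        if fo < fx:  # the offspring replaces the parent only when better
            x, fx = offspring, fo
    return x, fx
```

Adaptive variants would adjust sigma from the observed rate of successful mutations (the well-known 1/5 success rule).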
Genetic Algorithms (GA) is one of the earliest metaheuristics inspired by the concepts of natural selection and evolution, and it is among the most successful Evolutionary Algorithms (EA) due to its conceptual simplicity and easy implementation [37]. GA was initially developed by John Henry Holland in 1960 (and further extended in 1975) with the goal of understanding the phenomenon of natural adaptation and how this mechanism could be implemented in computer systems to solve complex problems.
In GA, a population of N solutions xi = [xi,1, xi,2, ..., xi,d] is first initialized; each of these solutions (called chromosomes) comprises a bitstring (that is, xi,n ∈ {0, 1}), which represents a possible solution for a particular binary problem.
At each iteration (also called generation) of GA’s evolution process, the chromosome
population is modified by applying a set of three evolutionary operators, namely:
selection, crossover and mutation. For the selection operation, GA randomly selects
a pair of chromosomes x p1 and x p2 (with p1 , p2 ∈ {1, 2, . . . , N } and p1 = p2 )
from within the entire chromosome population, based on their individual selection
probabilities. The probability Pi for a given chromosome 'i' (xi) to be selected depends on its quality (fitness value), given as follows:

Pi = f(xi) / Σ_{j=1}^{N} f(xj)   (1.10)
Then, for the crossover operation, the bitstring information of the selected chromosomes (now called parents) is recombined to produce two new chromosomes, xs1 and xs2 (referred to as offspring), by swapping bits at a crossover point l:

xs1,n = { xp1,n if n < l          xs2,n = { xp2,n if n < l
        { xp2,n otherwise                 { xp1,n otherwise     for n = 1, 2, ..., d   (1.11)

where xsr,n (with r ∈ {1, 2}) stands for the nth element (bit) of the sr-th offspring, while rand stands for a random number drawn from within the interval [0, 1].
This process of selection, crossover, and mutation of individuals takes place until
a population of N new chromosomes (mutated offspring) has been produced, and
then, the N best chromosomes among the original and new populations are taken for the next generation, while the remaining individuals are discarded [38–41].
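The three GA operators can be sketched as follows. This is a schematic illustration, not the book's implementation; in particular, the mutation operator follows the usual per-bit flip with probability pm, since the mutation equation is not reproduced in this excerpt, and all function names are our own.

```python
import random

def roulette_select(population, fitness):
    """Fitness-proportional selection (Eq. 1.10): pick a chromosome with
    probability f(x_i) / sum_j f(x_j)."""
    total = sum(fitness)
    r = random.uniform(0.0, total)
    acc = 0.0
    for chrom, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return chrom
    return population[-1]  # guard against floating-point rounding

def single_point_crossover(p1, p2):
    """One-point crossover (Eq. 1.11): swap the tails of the parents at a
    randomly chosen cut point l."""
    l = random.randrange(1, len(p1))
    return p1[:l] + p2[l:], p2[:l] + p1[l:]

def bit_flip_mutation(chrom, pm=0.01):
    """Assumed standard mutation: flip each bit independently with
    probability pm."""
    return [1 - b if random.random() < pm else b for b in chrom]
```

A full GA would apply these three operators repeatedly until N offspring exist, then keep the N best of parents and offspring, as the text describes.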
The Ant Colony Optimization (ACO) algorithm is one of the most well-known nature-inspired metaheuristics. The ACO approach was first proposed by Marco Dorigo in 1992 under the name of Ant Systems (AS) and draws inspiration from the natural behavior of ants [47]. In nature, ants move randomly while foraging for food, and when an appropriate source is found, they return to their colony while leaving a pheromone trail behind. Ants are able to guide themselves toward previously found food sources by following the paths traced by pheromones left by them or other ants.
However, as time passes, pheromones start to evaporate; intuitively, the more time an
ant takes to travel down a given path back and forth, the more time the pheromones
have to dissipate; on the other hand, shorter paths are traversed more frequently,
promoting that pheromone density becomes higher in comparison to that on longer
routes. In this sense, if an ant finds a good (short) path from the colony to a food source, other members are more likely to follow the route traced by said ant. The positive feedback provided by the increase in pheromone density along paths traversed by an increasing number of ants eventually leads all members of the colony to follow a single optimal route [16].
The first ACO approach was conceived as an iterative process devised to handle
the task of finding optimal paths in a graph [47]. For this purpose, ACO considers a
population of N ants which move through the nodes and arcs of a graph G(N , P)
(with N and P denoting its respective sets of nodes and arcs, respectively). Depend-
ing on their current state (node), each ant is able to choose from among a set of
adjacent paths (arc) to traverse based on the pheromone density and length associ-
ated to each of them. With that being said, at each iteration ‘k’, the probability for
a given ant ‘i’ to follow a specific path ‘x y’ (which connects states ‘x’ and ‘y’) is
given by the following expression:
p_{i(xy)}^k = [τ_{(xy)}^k]^α · [η_{(xy)}^k]^β / Σ_{z ∈ Yx} [τ_{(xz)}^k]^α · [η_{(xz)}^k]^β   (1.13)

where τ_{(xy)}^k denotes the pheromone density over the given path 'xy', while η_{(xy)}^k stands for the preference for traversing said path, which is relative to its distance (cost). Furthermore, Yx represents the set of all states adjacent to the given current state 'x'. Finally, α and β are constant parameters used to control the influence of τ_{(xy)}^k and η_{(xy)}^k, respectively.
By applying this mechanism, each ant moves through several paths within the graph until a specific criterion is met (e.g., a particular destination node has been reached). Once this happens, each ant backtracks its traversed route while releasing some pheromones on each of the paths it used. In ACO, the amount of pheromones released by an ant 'i' over any given path 'xy' is given by:

Δτ_{i(xy)}^k = { Q/L_i   if the ant used the path 'xy' in its tour
              { 0        otherwise                                  (1.14)

where L_i denotes the length (cost) associated with the route taken by the ant 'i', while Q stands for a constant value.
Finally, ACO includes a procedure used to update the pheromone density over all paths in the graph for the following iteration (k + 1). For this purpose, it considers both the amount of pheromones released by each ant while backtracking its traced route and the natural dissipation of pheromones that takes place as time passes. This is applied by considering the following expression:

τ_{(xy)}^{k+1} = (1 − ρ) · τ_{(xy)}^k + Σ_{i=1}^{N} Δτ_{i(xy)}^k   (1.15)
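Equations (1.13)–(1.15) translate naturally into two small helper functions. The sketch below represents paths as dictionary keys and is only illustrative; the names and data layout are our own assumptions.

```python
def path_probabilities(tau, eta, neighbors, alpha=1.0, beta=2.0):
    """Transition probabilities of Eq. (1.13): each adjacent path y gets
    weight tau(xy)^alpha * eta(xy)^beta, normalized over all neighbors."""
    weights = {y: (tau[y] ** alpha) * (eta[y] ** beta) for y in neighbors}
    total = sum(weights.values())
    return {y: w / total for y, w in weights.items()}

def update_pheromones(tau, deposits, rho=0.5):
    """Global pheromone update of Eq. (1.15): evaporate each trail by a
    factor (1 - rho), then add the per-ant deposits Q/L_i of Eq. (1.14).
    `deposits` is a list of {edge: amount} dicts, one per ant."""
    return {edge: (1.0 - rho) * tau[edge]
                  + sum(d.get(edge, 0.0) for d in deposits)
            for edge in tau}
```

In a full ACO run, an ant would sample its next move from `path_probabilities`, and `update_pheromones` would be applied once per iteration after all ants complete their tours.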
Bees are among the most well-known examples of insects which manifest a collective behavior, either for food foraging or mating. Based on this premise, many
researchers have proposed several different swarm intelligence approaches inspired
by the behavior of bees. In particular, the Artificial Bee Colony (ABC) approach
proposed by Dervis Karaboga and Bahriye Basturk in 2007 is known to be among
the most popular of these bee-inspired methods [48].
In the ABC approach, search agents are represented as a colony of artificial honey bees which explore a d-dimensional search space while looking for optimal food (nectar) sources. The locations of these food sources each represent a possible solution for a given optimization problem, and their amount of nectar (quality) is related to the fitness value associated with each of such solutions. Furthermore, the members of
the bee colony are divided into three groups: employed bees, onlooker bees, and scout bees. Each of these groups has distinctive functions inspired by the mechanics employed by bees while foraging for food. For example, the employed bees comprise the members of the colony whose function is to explore the surroundings of individually-known food sources in the hope of finding places with greater amounts of nectar. In addition, employed bees are able to share the information of currently known food sources with the rest of the members of the colony, so that they can also
exploit them. With that being said, at each iteration ‘k’ of ABC’s search process,
each employed bee generates a new candidate solution vi around a currently known
food source xi as follows:
v_i^k = x_i^k + φ · (x_i^k − x_r^k)   (1.16)
where x_i^k denotes the location of the food source remembered by a particular employed bee 'i', while x_r^k (with r ≠ i) stands for the location of any other randomly chosen food source. Furthermore, φ is a random number drawn from within the interval [−1, 1].
On the other hand, onlooker bees can randomly visit any food source known by
the employed bees. For this purpose, each available food source is assigned a certain probability of being visited by an onlooker bee as follows:

P_i^k = f(x_i^k) / Σ_{j=1}^{N} f(x_j^k)   (1.17)
Similarly to the employed bees, once an onlooker bee has decided to visit a
particular food source, a new candidate solution vik is generated around the chosen
location x_i^k by applying Eq. (1.16). Furthermore, any candidate solution v_i^k generated by either an employed or an onlooker bee is compared against its originating location x_i^k in terms of solution quality, and then the best among them is chosen as the new food source location for the following iteration; that is:

x_i^{k+1} = { v_i^k   if f(x_i^k) < f(v_i^k)
            { x_i^k   otherwise                (1.18)
Finally, scout bees are the members of the colony whose function is to randomly explore the whole terrain for new food sources. Scout bees are deployed to look for new solutions only if a currently known food source is chosen to be "abandoned" (and thus forgotten by all members of the colony). In ABC, a solution is considered abandoned only if it cannot be improved by either the employed or onlooker bees after a determined number of iterations, indicated by the algorithm's parameter "limit". This mechanism is important for the ABC approach since it allows it to maintain the diversity of solutions during the search process.
In general, ABC’s local search performance may be attributed to the neighborhood
exploration and greedy selection mechanisms applied by the employed and onlooker
bees, while the global search performance is mainly related to the diversification
attributes of scout bees.
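The ABC update rules of Eqs. (1.16)–(1.18) can be sketched as follows, assuming that fitness is maximized (consistent with the nectar analogy and Eq. (1.17)). The function names and the boundary-free update are illustrative simplifications, not the authors' implementation.

```python
import random

def onlooker_probabilities(fitness):
    """Visit probabilities for onlooker bees (Eq. 1.17): proportional to
    the nectar amount (fitness) of each known food source."""
    total = sum(fitness)
    return [f / total for f in fitness]

def bee_step(X, fitness, i, objective):
    """One employed/onlooker move: perturb food source i relative to a
    random partner r (Eq. 1.16), then greedily keep the better source
    (Eq. 1.18, with higher fitness being better)."""
    r = random.choice([j for j in range(len(X)) if j != i])
    phi = random.uniform(-1.0, 1.0)
    v = [xn + phi * (xn - xrn) for xn, xrn in zip(X[i], X[r])]
    fv = objective(v)
    if fitness[i] < fv:  # adopt v only if it is richer in nectar
        return v, fv
    return X[i], fitness[i]
```

A full ABC loop would also track, per food source, how many iterations have passed without improvement, replacing the source with a scout-generated random one once the "limit" parameter is exceeded.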