
Nature Inspired Approaches

and Swarm Intelligence

By
Ritu Tiwari
Introduction
• Due to advancements in the field of automation, and given that natural resources are on the verge of exhaustion, scientists today pay close attention to low resource consumption (in terms of power, time and occupancy), which can be achieved through optimization. Optimization can be applied at many levels.

• Systems are also required to adapt to the appearance of new data and new combinations of situations. This adaptation is handled in different ways in different situations and depends on the mathematics of the algorithms.

• Intelligent Systems: Intelligent Systems are systems that are equipped with artificial intelligence and are capable of processing data and taking decisions intelligently. Unlike Expert Systems, they can handle unknown environments and unknown events through their decision-making capability.

• Expert Systems: An Expert System is a kind of Intelligent System that behaves more like an intelligent pattern-matching system, capable of making decisions based on the data fed to it during training. Its expertise, intelligence and decision making rest on that training data only. Such systems cannot handle unknown data: their intelligence covers only the known environment or the set of known events.

• An example is a biometric system. An Expert System is subjected to an unlimited number of external inputs, of which it can correctly process only the known ones. For identifying a person, several characteristics are individually unique, but for large databases a combination of such characteristics is used as the unique identifier.
• Inference Systems: Intelligence or decision making in Inference Systems is
based on conclusions drawn from rational thinking rather than formal logic.
• But should we call it non-logical decision making?
• In an Inference System, the control statements are cognitive statements that may or
may not be representable mathematically, and are therefore very difficult to process
computationally.
• The control statements arise out of rational thinking; the literal meaning of a
statement is less important than its implications for articulating meaningful
relationships.
• Adaptive Learning
• Adaptive learning is a learning-based modelling process for optimization and control
systems.
• During the search for a system optimum, the algorithm must adaptively learn, respond
to the required step variation, and identify the regions where optimal or near-optimal
solutions are most likely to be found.
• This response is obtained through feedback generated by a mathematical representation
that adapts to the parameters and evaluations produced so far.
• Heuristics
• A heuristic can be defined as a cognitive search-based algorithm for quickly
finding optimized or near-optimized solutions to problems that cannot be solved
with conventional mathematical tools and exhaustive search techniques.
• Such problems cannot be represented by a continuous, gradient-based
mathematical model, or may have multiple local peaks (minima or maxima).
• These are non-convex problems, such as multi-modal problems, multiple-local-optima
problems and problems based on discontinuous equations, which are difficult,
and sometimes infeasible or impractical, to solve analytically.
• Solutions are measured with the help of an evaluation function, also called a
heuristic function; this is convenient in many cases, but for the majority of
systems and their models (such as real-life and real-time ones), that function is
very difficult to formulate.
• Meta-Heuristics: A meta-heuristic is a semi-randomized search algorithm in which the optimal solution of a
problem is derived through the cooperation of agents, using the deterministic and stochastic operations
that constitute the algorithm.
• The joint application of deterministic and stochastic operations engages the algorithm in both
exploration and exploitation simultaneously.

• Hyper-Heuristics: A hyper-heuristic is an automated search scheme in which the operations
are chosen based on previously learned experience.
• It is a cooperative approach combining search and machine learning, where the search is driven by the
experience gathered by learning models.
• In hyper-heuristics, the search is accompanied by intelligent decision making over the choice of
parametric variations and operations in the search space.
• Multi-Agent Coordination:
• Multi-Agent Coordination is the phenomenon by which a group of agents works
simultaneously under the influence or reference of neighbouring agents; in some
cases one or more agents may be selected as leaders from the pool.
• The aim of such coordination is that each agent accomplishes a part of the whole
task. The motive behind this kind of coordinated work is efficiency of the system
with respect to time, energy, duplication of effort, computational power, the cost
of decisions and the complexity of the overall system.
• Multi-Agent Learning:
• Learning from experience and exploration in an agent-based
coordinated system is known as Multi-Agent Learning.
• Each agent may work independently or under the influence and
coordination of its co-agents.
• Learning in a multi-agent system can be shared or unshared, depending
on how similar the agents are and how far the learning of one agent can
help another.
• In a leader-based system, the data accumulated by the joint effort of the
agents is centrally processed to derive learned experience, and the
processed outcome is shared back with the agents.
Particle Swarm Optimization (PSO)

• Particle Swarm Optimization (PSO) is the pioneer of swarm intelligence algorithms; it utilizes
collective effort to achieve better convergence in optimization. It was introduced by J.
Kennedy and R. Eberhart in 1995.
• The two founders came from diverse backgrounds and had been cooperating on socio-
psychological simulation; the outcome is one of the finest algorithms for optimization
and problem solving.
• The Particle Swarm Optimization algorithm works mainly on the basis of two factors: one is
collective exploration, and the second is the social, guidance-based influence of co-particles
on each particle's decision making.
• The particles perform a search based on the evaluation function taking into consideration the
topological information and share their knowledge and success evaluation with the other particles
iteratively before they, as a whole or individually, converge to the optimum solution.
• The social influence in Particle Swarm Optimization is formulated through mathematical
foundations that represent the social behaviour and the mode of decision making of the
particles in subsequent iterations, based on the present situation.
• The two main equations which govern the agents of Particle Swarm Optimization
and represent the intelligent collective behaviour are given as follows:

  v_i(t+1) = w · v_i(t) + c1 · r1 · (pbest_i − x_i(t)) + c2 · r2 · (gbest − x_i(t))
  x_i(t+1) = x_i(t) + v_i(t+1)

• Each particle is represented as x_i ∈ S (the solution set), where D is the dimension of the solution.
• The velocity component v_i exists ∀ x_i and is interpreted as the variation component along the
x_i axes; v_i is calculated by the first equation above.
• Here t represents the current iteration, or rather the t-th iteration.
• The two equations are iterative and require some memory, as they utilize stored values
from the previous iteration.
• r1 and r2 are random factors uniformly distributed in the range [0, 1], where the value
zero may be included or excluded.
• Either way, it is theoretically justified to include or exclude the social influence of the
fellow best particles.
• Also, c1 and c2 are the two acceleration coefficients that scale the influence of the iteration
best and the global best positions.
• Here w is an inertia weight scaling the previous time step velocity factor that must be
added to get the new position of the particle.
• This inertia weight w can also be varied, combined with a momentum or even adaptively
increased or decreased.
• The difference between ibest and pbest is that ibest is the best solution of the previously
completed solution pool (which may or may not be the global best, though it sometimes is,
when a new global best is found and updated), while pbest is the best solution visited by
that particular particle (there is only a small chance that pbest is also the global best).
• Also, pbest differs from particle to particle, while ibest is the same for all individuals.
• At each iteration, each of the PSO particle stores in its memory the best solution visited so far,
called pbest, and experiences an attraction towards this solution as it traverses through the
solution search space.
• As a result of the particle-to-particle information, the particle stores in its memory the best
solution visited by any particle, and experiences an attraction towards this solution, called
gbest, as well.
• The first and second factors are called cognitive (p best) and social (gbest) components,
respectively.
• After each iteration, the values of pbest and gbest are updated for each particle in case a
better or more dominating solution (in terms of the fitness evaluator) is found.
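The update rules above can be sketched in Python. The parameter values (w = 0.7, c1 = c2 = 1.5, the search bounds and swarm size) are illustrative assumptions, not values prescribed by the text; the function names are ours.

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0):
    """Minimise f over [lo, hi]^dim with a basic PSO."""
    # Random initial positions and zero velocities.
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                     # personal best positions
    pbest_val = [f(xi) for xi in x]                 # and their fitness values
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]    # global best so far

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:                  # update personal best
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:                 # and the global best
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

Calling `pso(lambda p: sum(t * t for t in p), 2)` minimises the 2-D sphere function and returns a point near the origin.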
Variants of Particle Swarm Optimization Algorithm
• Sequential Particle Swarm Optimization Algorithm

• If we have a varying or moving multi-modal search space where the global minimum is
constantly moving, we cannot restart the search from the beginning each time; at the same
time, the search process and exploration must continue.

• The Sequential Particle Swarm Optimization Algorithm is one such variant of PSO; it has been
applied to tracking objects in a medium (image or video).

• The traditional Particle Swarm Optimization Algorithm always starts from initialization, since it
preserves a global and personal (iteration) best. However, once the search space has changed,
the global and personal (iteration) best values become stale, and the situation must be
re-assessed.
• Sequential Particle Swarm Optimization Algorithm is a variant of Particle Swarm
Optimization Algorithm mainly introduced to solve a particular application of visual tracking.
• In visual tracking, the object is in motion and hence the search environment changes with
time. In this kind of random, dynamic problem, avoiding stale local optima requires special
care and is very difficult to handle with the traditional Particle Swarm Optimization
Algorithm.
• Hence, the modified Sequential Particle Swarm Optimization Algorithm is introduced
with special operations to take care of the changing environment.
• The main idea behind this variant is to introduce more variation, diversity and learning,
based on measured experience of the search space and taking the progress of the search
into consideration.
• The previously found best individuals are initially propagated randomly to enhance
diversity; this is followed by the operations of the Sequential Particle Swarm
Optimization Algorithm, with adaptive parameters constantly tuned per iteration and per
co-agent.
• The stopping criterion is evaluated to check whether the algorithm's
convergence criterion is met, using a newly introduced, efficient
convergence check.
• The three main operational steps of the Sequential Particle Swarm
Optimization Algorithm framework are as follows:
1. Random Propagation,
2. Initialization of Adaptive PSO, and
3. Convergence Criterion Check.
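The three steps can be illustrated with a toy sketch (not the published tracker): the "frames" are a list of objective functions standing in for a changing search space, and all names and parameter values (`track`, `sigma`, the inner iteration budget, the linearly decreasing inertia) are assumptions of ours.

```python
import random

def track(frames, n=15, sigma=0.5, inner_iters=30, tol=1e-3):
    """Sequential-PSO sketch: one short PSO run per frame, re-seeded
    from the previous frame's best particles instead of from scratch."""
    swarm = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(n)]
    results = []
    for f in frames:
        # 1. Random propagation: jitter the carried-over bests for diversity.
        swarm = [[x + random.gauss(0, sigma) for x in p] for p in swarm]
        # 2. Adaptive PSO: a short inner optimisation on the current frame.
        v = [[0.0, 0.0] for _ in range(n)]
        pbest = [p[:] for p in swarm]
        pval = [f(p) for p in swarm]
        g = min(range(n), key=lambda i: pval[i])
        gbest, gval = pbest[g][:], pval[g]
        for t in range(inner_iters):
            w = 0.9 - 0.5 * t / inner_iters     # simple adaptive inertia
            for i in range(n):
                for d in range(2):
                    r1, r2 = random.random(), random.random()
                    v[i][d] = (w * v[i][d]
                               + 1.5 * r1 * (pbest[i][d] - swarm[i][d])
                               + 1.5 * r2 * (gbest[d] - swarm[i][d]))
                    swarm[i][d] += v[i][d]
                val = f(swarm[i])
                if val < pval[i]:
                    pbest[i], pval[i] = swarm[i][:], val
                    if val < gval:
                        gbest, gval = swarm[i][:], val
            # 3. Convergence criterion check: end the frame early if good.
            if gval < tol:
                break
        swarm = [p[:] for p in pbest]           # carry survivors forward
        results.append(gbest)
    return results
```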
Inertia Weight Strategies in Particle Swarm Optimization Algorithm
Inertia Weight Strategies target the weight value that multiplies, as inertia, the present
velocity value.
The inertia weight w in the velocity equation of the traditional Particle Swarm
Optimization Algorithm usually remains static. If it is too small, exploration is
hindered; if it is too large, local search is hampered.
Hence, different mathematical expressions have been tried in order to introduce
adaptive, changing behaviour into the overall equation system.
These are ways of introducing adaptive and non-uniform variations for the particles of the
Particle Swarm Optimization Algorithm.

• Constant Inertia Weight

• The most simplified and non-adaptive mode is the use of a constant value of wi for every iteration i.

• wi = c, where c is a constant which needs to be initialized; the best c can be
determined only through experimentation.

• Random Inertia Weight

• Here wi is generated randomly between two constants a and b (for example wi = a + (b − a) · r,
with r drawn uniformly from [0, 1]); it is an opportunistic scheme.
• Adaptive Inertia Weight

• In Adaptive Inertia Weight, wi tends to change depending on the progress of the search and on factors
which provide information regarding the change in experience and age of the search agents.
• Sigmoid Increasing Inertia Weight

• In Sigmoid Increasing Inertia Weight, the numerical value of the inertia weight wi gradually
increases along a sigmoid curve with each iteration, i.e. with the time factor.
• The Chaotic Inertia Weight

• In the Chaotic Inertia Weight, the value of the inertia wi varies non-linearly and fluctuates without any
obvious rule. However, as it follows a definite mathematical expression with fixed numerical parameter
values, it is described as chaotic and is called the Chaotic Inertia Weight.
• Chaotic Random Inertia Weight

• Chaotic Random Inertia Weight is also chaotic, but unlike the Chaotic Inertia Weight it additionally
depends on a random numerical value produced by a random generator. Hence, wi in Chaotic Random
Inertia Weight fluctuates more than in the previously discussed Chaotic Inertia Weight.
• Oscillating Inertia Weight

• In Oscillating Inertia Weight, the value of the inertia wi oscillates, following a definite
fluctuation such as damping or reverse damping, or a constant oscillating function such as
sine or cosine.
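Since the exact published expressions are omitted here, the following sketch gives one plausible instance of each strategy. All constants (the 0.4/0.9 bounds, the logistic map z ← 4z(1 − z) as the chaotic source, the cosine oscillation) are illustrative assumptions, not the formulas from the source.

```python
import math
import random

def constant_w(i, c=0.7):
    # Constant inertia weight: the same value at every iteration i.
    return c

def random_w(i, a=0.4, b=0.9):
    # Random inertia weight: w_i drawn uniformly between constants a and b.
    return a + (b - a) * random.random()

def sigmoid_increasing_w(i, n_iter, w_start=0.4, w_end=0.9, u=0.25):
    # Sigmoid increasing: w_i rises smoothly from w_start towards w_end.
    return w_start + (w_end - w_start) / (1.0 + math.exp(-u * (i - n_iter / 2)))

def chaotic_w(i, n_iter, z=0.37, w1=0.9, w2=0.4):
    # Chaotic: a logistic map z <- 4z(1-z) perturbs a decreasing weight.
    for _ in range(i + 1):
        z = 4.0 * z * (1.0 - z)
    return (w1 - w2) * (n_iter - i) / n_iter + w2 * z

def oscillating_w(i, n_iter, w_min=0.3, w_max=0.9, k=7):
    # Oscillating: a cosine carries w_i back and forth between w_min and w_max.
    return ((w_min + w_max) / 2
            + (w_max - w_min) / 2 * math.cos(2 * math.pi * k * i / n_iter))
```

Any of these functions can be dropped into the velocity update in place of a fixed w.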
Ant Colony Optimization
Ant colony optimization (ACO) is an agent-based cooperative search
meta-heuristic algorithm, in which each agent seeks its own
independent solution under the influence of pseudo communication
with social or global fellow agents.

ACO was initially proposed by Marco Dorigo in 1992 in his PhD thesis as
the ant system, which imitates the way ants search for and procure food
and guide fellow ants to the food source.

There are a few types of variants of ACO, which are prevalent and have
unique mathematical features to enhance the performance of the
algorithm and at the same time balance between exploration and
exploitation.
Advantage: ACO is an algorithm that is suitable for almost any kind of
application.
Disadvantage: ACO does not possess any operator that can vary a
parameter through a continuous space. This cripples the algorithm in
situations that require continuous search.
Another drawback is encountered during path planning or adaptive sequence
generation, where the solution is a sequence of events (or states or nodes)
whose number is not fixed. If ACO is applied to such problems, a proper
sequence may be created as a valid solution, but there is also a chance that
a loop is formed; in that case, the agent is wasted.
Some of these drawbacks have been overcome by hybridization with other
nature-inspired algorithms.
Biological Inspiration:
• It is a well-known fact that ants, wasps, and so on are known to show
some of the most complex social behavior ever found in the insect
world.
• Their behavior is dictated by the collectiveness of their communities to
achieve as well as sustain vital tasks as defense, feeding, reproduction,
and so on.
• Though complete understanding of ant colonies is near to impossible,
yet there are some aspects that are well known to the researchers and
that have proven to be consistent.
• These aspects include the following:
1. Competition: Ants live in small to large-sized communities within the
environment. This environment in which they live affects their daily
functioning and the tasks they perform
2. High Availability: Gordon has conducted many experiments to study
the behaviour of ants, and these experiments have shown that ants
may also switch tasks.
There are four different tasks that an ant can perform:
1. Foraging
2. Patrolling
3. Maintenance work
4. Middle work
3. Brownian Motion:
In their natural habitat, ants are continuously inundated with stimuli of
all kinds.
These stimuli are presented to them from all around, and a great part
of their sensorial system is dedicated to pick up only the most
important stimuli.
These stimuli include heat, magnetism, light, pressure, gravity, and
chemicals.
It can be safely asserted that under the influence of any stimulus (except
gravity), ants tend to show a random pattern of locomotion.
If a stimulus gives this locomotion a particular orientation, its absence
does not remove the pattern; instead, another random pattern of
locomotion appears.
• Ants remain very determined when they are able to sense a
pheromone or food around.
• Even when a pheromone trail is obstructed, they will still find the way
to continue their trail.
• In the real world, the sequence of movements performed depends on
the task at hand. For example, when the ants are patrolling, they
show a locomotion pattern, which is similar to Brownian movement.
• Similarly, maintenance and middle workers usually do not go very far
from the nest, whereas the foragers leave the nest usually in a
straight line.
Brownian motion shown by the ants
Pheromones and Foraging
• Pheromone is a generic name and refers to any endogenous chemical substance that is
secreted by an insect. Insects secrete this chemical in order to incite reaction in other
organisms of the same species.
• Insects use pheromones for diverse tasks that include reproduction, identification,
navigation, and aggregation.
• The antennae on the ant's body are responsible for detecting pheromones.
• The chemoreceptors present on the antennae trigger a biological reaction when the sum
total of all action potentials exceeds a given threshold.
• Different species of ants have different strategies for foraging.
• Most species have some selected ants who have been assigned the task of scanning the
environment around for food.
• When they leave their nest, they usually take one direction and explore the land in this
direction.
• A pheromone trail is left behind them and this trail is used by other ants if the foraging
ant happens to succeed in finding the food.
Flowchart of ant colony optimization algorithm
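A compact sketch of ACO applied to the travelling salesman problem, the standard demonstration of the algorithm. The tour construction (probability ∝ τ^α · (1/d)^β) and the evaporate-then-deposit pheromone update follow the generic ant system; the parameter values are illustrative assumptions.

```python
import random

def aco_tsp(dist, n_ants=20, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    """Small ACO sketch for the TSP on a symmetric distance matrix `dist`."""
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]         # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # Pick the next city with probability ~ tau^alpha * (1/d)^beta.
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r = random.uniform(0, sum(w for _, w in weights))
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation, then each ant deposits pheromone along its tour.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += Q / length
                tau[b][a] += Q / length
    return best_tour, best_len
```

On four cities at the corners of a unit square, the ants quickly settle on the perimeter tour of length 4.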
Cuckoo Search Algorithm
• The CS algorithm was proposed by Yang and Deb in 2009.
• It is competent and capable in dealing with problems that need global optimization.
• The algorithm is rooted in the reproduction strategy of cuckoo birds. These birds
never make nests of their own; rather, they rely on host birds' nests for laying
their eggs.
• Cuckoo eggs mimic the physical characteristics of the host bird's eggs in spots and
colour. If the strategy of hiding the eggs is unsuccessful, that is, if the host bird
discovers the cuckoo's eggs, there are two possibilities: it can throw the eggs away,
or it can abandon the nest and build a new one elsewhere.
• The nests are initialized randomly for laying eggs. Each nest contains one egg in
the simpler one-dimensional problem, but the number of eggs increases with the
number of dimensions. Eggs with better fitness are passed to the next generation;
eggs with low fitness are replaced with probability pa.
• The prominent feature that distinguishes CS from other algorithms is
the concept of the Lévy flight.
• Instead of a plain random walk, CS uses a Lévy-flight random walk
strategy, which places CS ahead of other optimization techniques.
• A study by Reynolds and Frye shows that fruit flies (Drosophila
melanogaster) explore their surroundings using a series of straight
flight paths punctuated by sudden 90° turns, leading to a free
search pattern in the Lévy-flight style.
Lévy Flight
Traditional Cuckoo Search
Optimization Algorithm
• The CS algorithm can ideally be described by three rules:
1. Each cuckoo lays its egg in a randomly chosen nest.
2. High-quality eggs contained in some nests are passed over to the
next generation.
3. The number of host nests is fixed, and with a probability pa ∈ [0, 1]
an egg is discovered by the host, in which case it is either discarded
or results in the generation of a new nest.
• CS can be described algorithmically as follows:
• 1. Generate a population of n eggs randomly.
• 2. Get a cuckoo by the Lévy-flight random walk strategy.
• 3. Evaluate the fitness of the randomly chosen cuckoo.
• 4. Compare the fitness of the new solution with the old solution. Keep the
better egg as the new solution.
• 5. A fraction of the worst eggs is abandoned with probability pa, and new
ones are generated through Lévy flights.
• 6. Keep the best solutions.
• 7. Repeat steps (2)–(6) until the iterations are exhausted.
• 8. The best nest with the best objective function value is returned.
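Steps (1)–(8) can be sketched as follows. The Mantegna recipe for generating Lévy-stable steps and all parameter values are illustrative assumptions, and a greedy per-nest comparison stands in for step 4.

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm for a heavy-tailed Levy-stable step length.
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u, v = random.gauss(0, sigma), random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, alpha=0.1,
                  lo=-5.0, hi=5.0):
    """Minimise f with a basic cuckoo search."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        for i in range(n_nests):
            # Steps 2-4: propose a new egg for nest i via a Levy flight,
            # and keep it only if it improves on the current one.
            new = [min(hi, max(lo, x + alpha * levy_step())) for x in nests[i]]
            fn = f(new)
            if fn < fit[i]:
                nests[i], fit[i] = new, fn
        # Step 5: abandon a fraction pa of the worst nests.
        order = sorted(range(n_nests), key=lambda k: fit[k], reverse=True)
        for k in order[:int(pa * n_nests)]:
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
            fit[k] = f(nests[k])
    b = min(range(n_nests), key=lambda i: fit[i])
    return nests[b], fit[b]
```

The heavy tail of the Lévy distribution means most steps are small (exploitation) while occasional long jumps keep exploring.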
Variants of Cuckoo Search
Algorithm
• Modified Cuckoo Search
• Improved Cuckoo Search Algorithm with Adaptive Method
Modified Cuckoo Search
• The MCS algorithm, proposed by S. Watson et al., is implemented on a gait database.
• The basic steps involved in the MCS algorithm are as follows:
• 1. Generate initial population: Randomly generate a population of n eggs. In the
binary version of this algorithm, search space can be analyzed as n-dimensional
Boolean lattice, in which the best fitness lies across the corners of a hypercube.
• 2. Initialize the best fit and global fit to a minimum value.
• 3. Evaluate population: At every iteration, the population is evaluated for best
solution. After that, the population is scaled, so as to get the maximum fitness
value by selecting the minimum number of features.
• 4. Sort eggs by the order of fitness.
• 5. Replace the worst eggs with new solutions. Here, the step size A ← A/√G, where
G: current iteration number.
• 6. Crossover of top eggs is performed, so as to converge quickly to the best
solution.
• 7. Repeat steps (4)–(6) till condition exhausts
• The unique features of MCS, discarding the worst eggs and cross-breeding the
elite eggs to produce an efficient solution, separate it from other
meta-heuristic approaches.
• Implementing Lévy flights rather than standard random walk techniques is
advantageous.
• As the Lévy distribution has infinite variance and mean, MCS can survey the
search space more efficiently than other approaches.
• Yang et al. proved mathematically that CS converges globally more
efficiently.
Improved Cuckoo Search
Algorithm with Adaptive Method
• Though the CS algorithm has the ability to reach the optimal solution, improving CS
involves improving the convergence rate of the algorithm.
• In the traditional CS algorithm, the Lévy-flight step length α is a constant, which, in
practice, limits the method's adaptivity in engineering problems.
• Zhang and Chen in 2014 proposed an improvement on the traditional CS algorithm, which
involves a change in the value of α at every iteration.
• Thus, the improved CS algorithm uses an adaptive method for calculating the step-size
scaling parameter α as a function of the iteration count,

• where αmax and αmin signify the maximum and the minimum of the step length, respectively,
and N is the iteration counter of this method with initial value 1, and Nmax is the maximum of N.
• The step size controls the contribution of the levy flight in the solution
update phase.
• A high value of step size would mean that the solution in (i + 1)th
iteration is primarily governed by the levy flight distance (modeled as
step length). As a result, a greater emphasis is placed on exploration.
• On the other hand, a small value of step size would mean that the
levy flight has a little say in the solution update phase; this essentially
favors exploitation.
• Thus, step size can bring about a holistic trade-off between
exploration and exploitation to ensure that the algorithm converges
to an optimal solution.
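The source omits the exact expression, but one common adaptive schedule decays α from αmax at the first iteration down to αmin at N = Nmax. The exponential form below is an assumption on our part; a linear decay between the same endpoints is equally plausible.

```python
import math

def adaptive_alpha(N, N_max, a_min=0.03, a_max=2.50):
    """Step-size schedule: alpha(0) = a_max, alpha(N_max) = a_min,
    decaying exponentially in between (assumed form)."""
    return a_max * math.exp(math.log(a_min / a_max) * N / N_max)
```

Early iterations thus take large Lévy steps (exploration), while late iterations take small ones (exploitation), matching the trade-off described above.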
QUESTIONS
• Given that the step size α is bounded in the closed interval [0.03,2.50]
and maximum number of iterations allowed Nmax = 6000, what is
the expected value of step-size parameter after 3000 iterations?
• How can ACO algorithm be used for solving TSP problem?
• What should a good ACO strategy look like in terms of local and global
search?
• Discuss the inherent limitations of ACO algorithm. What steps could
be taken to compensate them?
• What are the limitations and drawbacks of GA?
Artificial Bee Colony
• Swarm intelligence algorithms work on two fundamental concepts,
which are necessary and sufficient for exhibiting their properties:
self-organization and division of labor.
• The honey bee swarm has essentially three components:
• (1) food sources,
• (2) employed foragers, and
• (3) unemployed foragers.
• Food sources: Food source value depends on factors such as its richness,
concentration of energy, proximity to the nest, or ease of extraction of
energy.
• Employed foragers: Employed foragers are associated with a food source
and are tasked to exploit that source. They carry information about
their source, its position relative to the nest, and its profitability,
and share this information with others with some probability.
• Unemployed foragers: Unemployed foragers look out for food sources
continuously.
• There are two types of forager bees: (1) scouts and (2) onlookers.
• Scout bees search their environment continuously in order to find new
food sources, whereas onlookers wait at the nest and choose food
sources based on the information transmitted by employed foragers.
• The information exchange among bees results in the important
phenomena of collective knowledge.
• Some parts of the hive can be observed to be common for the entire
hive.
• Among these parts, the most important area of the hive from the
perspective of information exchange can be seen as the dancing area.
• The dancing area is the place where the bees communicate among
themselves about the quality of food sources. The dance is referred
to as the waggle dance.
• The dance floor provides the information about all the rich food
sources to the onlookers, where they can watch numerous dances
and then decide which among them would be the most profitable
food source.
• The most profitable sources have a high probability of getting
selected by the onlookers because more information is circulated
about these food sources during the dance.
• The information among the employed foragers is shared with a
probability that directly depends on the profitability of the food
source: more profit results in more sharing of its information
during the waggle dance.
• Hence, recruitment to a food source is also directly proportional
to the food source's profitability.
• Positive feedback: The number of onlooker visits increases with the
nectar amount of the food sources.
• Negative feedback: When the food source quality becomes poor,
the bees stop their exploitation.
• Fluctuations: New food sources are continuously discovered by scout
bees using the random search process.
• Multiple interactions: In the dance area, the bees share information
about their food sources with the other bees.
• When a food source is located by a bee, the bee uses its capability to
memorize the position of the source and then starts exploiting it
immediately, changing its status to an employed forager.
• The foraging bee then collects nectar from the source, returns to the
hives, and then deposits the nectar to the food store.
Flowchart of the artificial bee colony algorithm
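The employed/onlooker/scout cycle described above can be sketched as follows. Function and parameter names are illustrative; the roulette-wheel onlooker selection implements the profitability-proportional recruitment idea, and the `limit` counter implements the negative-feedback abandonment of poor sources.

```python
import random

def abc(f, dim, n_sources=10, iters=100, limit=20, lo=-5.0, hi=5.0):
    """Minimise f with a basic artificial bee colony."""
    src = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    fit = [f(s) for s in src]
    trials = [0] * n_sources                        # stagnation counter per source
    best_x, best_val = None, float("inf")

    def try_improve(i):
        # Move one coordinate of source i towards/away from a random partner.
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = src[i][:]
        cand[d] += random.uniform(-1, 1) * (src[i][d] - src[k][d])
        fc = f(cand)
        if fc < fit[i]:
            src[i], fit[i], trials[i] = cand, fc, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):                  # employed-bee phase
            try_improve(i)
        # Onlooker phase: richer sources attract more bees (roulette wheel).
        quality = [1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v) for v in fit]
        total = sum(quality)
        for _ in range(n_sources):
            r = random.uniform(0, total)
            pick = n_sources - 1
            for j in range(n_sources):
                r -= quality[j]
                if r <= 0:
                    pick = j
                    break
            try_improve(pick)
        # Remember the best source before any of them is abandoned.
        b = min(range(n_sources), key=lambda i: fit[i])
        if fit[b] < best_val:
            best_x, best_val = src[b][:], fit[b]
        # Scout phase: abandon exhausted sources and scout fresh ones.
        for i in range(n_sources):
            if trials[i] > limit:
                src[i] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[i], trials[i] = f(src[i]), 0
    return best_x, best_val
```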
Bat Algorithm
• Various optimization approaches have been proposed:
• 1. Classical methods are meant for problems where runs starting from the same
origin values follow the same path and always reach the same final destination.
• 2. Stochastic algorithms are based on randomization; as such, they may give
different solutions in different runs.
• 3. Population-based algorithms deal with a set of solutions that try to improve on
themselves over the iterations of the algorithm.

• The BA is a population-based meta-heuristic approach proposed by Xin-She Yang. The
algorithm mimics the echolocation of bats. Bats make use of SONAR echoes to detect
and avoid obstacles that might be present in their path; thus, bats navigate by
using the time delay from emission to reflection.
• The pulse rate is measured to be between 10 and 20 pulses per
second, and each pulse lasts only about 8–10 ms.
• Once the echoes return, the bats transform them into useful
information to carry out exploration.
• The purpose of this exploration mechanism is to find out how far
away the prey is.
• Bats usually use wavelengths λ in the range of 0.7–17 mm.
• The pulse rate can be bounded in the range from 0 to 1, where 0
means that there is no emission and 1 means that the bat's emission
is at its maximum.
• Following are the assumptions made regarding the behavior of the
bats:
• 1. They use echolocation phenomenon to sense the distance, and that
they are also able to distinguish between a prey and a potential
barrier.
• 2. Bats fly randomly with a velocity vi at position xi with a fixed
frequency fmin, varying wavelength λ, and loudness A0 to search for
prey, and that they can automatically adjust the wavelength of their
emitted pulses as well as the rate of pulse emission.
• Steps of the Algorithm
• 1. The algorithm begins with an initialization (lines 1–3) step, in which we
initialize the parameters of algorithm, generate, and also evaluate the
initial population. In this step, we also determine the best candidate
xbest.
• 2. Generation of new solutions (line 6). This step witnesses the artificial
bats moving within the space as per the update rules of the algorithm.
• 3. Local search step (lines 7–9): random walks improve on the value of
the best solution.
• 4. Evaluation of the new solution.
• 5. Conditional save of the best solution (lines 12–14): the best
solution is conditionally archived in this step.
• 6. Finding the best solution (line 15): the best solution of the current
iteration is updated.
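The six steps above can be sketched in Python as follows. This is a minimal illustration, not a definitive implementation: the parameter values (f_min, f_max, the loudness of 0.9, the pulse rate of 0.5, and the 0.01 random-walk scale) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_algorithm(obj, dim=2, n_bats=20, n_iter=200,
                  f_min=0.0, f_max=2.0, loudness=0.9, pulse_rate=0.5,
                  lower=-5.0, upper=5.0):
    """Minimal Bat Algorithm sketch following the steps above (assumed parameters)."""
    # Step 1: initialize and evaluate the population, determine the best candidate
    x = rng.uniform(lower, upper, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    fit = np.apply_along_axis(obj, 1, x)
    best = x[fit.argmin()].copy()

    for _ in range(n_iter):
        for i in range(n_bats):
            # Step 2: generate a new solution via the frequency-tuned update rules
            f = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lower, upper)
            # Step 3: local search, a small random walk around the current best
            if rng.random() > pulse_rate:
                cand = np.clip(best + 0.01 * rng.standard_normal(dim), lower, upper)
            # Step 4: evaluate the new solution
            f_cand = obj(cand)
            # Step 5: conditional save (louder bats accept new solutions more often)
            if f_cand <= fit[i] and rng.random() < loudness:
                x[i], fit[i] = cand, f_cand
            # Step 6: update the best solution of the current iteration
            if f_cand <= obj(best):
                best = cand.copy()
    return best, obj(best)

best, val = bat_algorithm(lambda z: np.sum(z ** 2))
print(val)  # typically very close to 0 for this sphere function
```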
• The following assumptions called the ideal assumptions are put in
perspective:
• 1. In order to sense distances, bats usually resort to echolocation and
they also have the wisdom to differentiate between a potential prey
and a barrier.
• 2. From any given position xi, a bat moves randomly with a velocity
vi. The minimum frequency of the emitted waves is a fixed frequency
fmin; however, the wavelength can vary, and so can the loudness A0. In
addition, it is also assumed that the bats can tune the
wavelength of their emitted pulses and tweak the rate of pulse
emission r depending on the position of the barrier/target.
• 3. Loudness is bounded between a positive large value A0 and a
minimum constant value Amin.
• The better performance of BA as compared to other swarm intelligence (SI)
algorithms is attributed to the following:
• 1. BA uses echolocation and frequency tuning to bring about
optimization. Though these phenomena are not directly mimicked,
still they go a long way in providing extra functionalities to the
algorithm.
• 2. Automatic zooming: By virtue of this property, BA has a distinct
capability to automatically zoom into a region simulating exploitation
in the process. This, in addition, is supplemented by an automatic
switch from exploration-to-exploitation. As a consequence, BA
converges quickly.
• 3. BA, in contrast to conventional swarm intelligence algorithms, uses
parameter control. This enables the algorithm to vary the parameter
values as the iterations proceed.
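The parameter control mentioned in point 3 is commonly realized by decreasing each bat's loudness and increasing its pulse emission rate as iterations proceed. A small sketch of these schedules follows; the geometric/exponential forms are the ones commonly associated with BA, and the values of α and γ are illustrative assumptions.

```python
import math

def loudness_and_pulse_rate(t, A0=1.0, r0=0.5, alpha=0.9, gamma=0.9):
    """Common BA parameter-control schedules (alpha and gamma are assumed values).

    Loudness decays geometrically toward silence, while the pulse rate
    rises from 0 toward its initial value r0 as iterations proceed.
    """
    A_t = A0 * alpha ** t                   # A^t = alpha^t * A^0
    r_t = r0 * (1 - math.exp(-gamma * t))   # r^t = r^0 * [1 - exp(-gamma * t)]
    return A_t, r_t

for t in (0, 5, 50):
    print(t, loudness_and_pulse_rate(t))
```

As t grows, A_t shrinks (bats quieten as they close in on prey) while r_t approaches r0 (pulses are emitted more frequently), which shifts the search from exploration toward exploitation.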
Flower Pollination Algorithm
• Real-world design problems found in the engineering domain are
usually multiobjective problems, that is, there is more than one
objective that needs to be accomplished at the same time.
• The trouble lies in the fact that in most cases these objectives
conflict with one another. The solution in these cases is to find a
point where an optimal balance can be struck between these
objectives.
• Multiobjective problems pose additional challenges such as
increased time complexity, inhomogeneity, and high dimensionality.
• The flower pollination algorithm is a nature-inspired algorithm
motivated by how pollination takes place in flowers.
• Pollination is defined as the process of transfer and consequent
deposition of pollen grains into the stigma of the flower.
• For this transfer to actually take place, an agent is required. This agent
facilitates the motion of pollen grains that would otherwise have been
very restricted.
• Pollination can be divided into the following classes:
1. Self-pollination
2. Cross-pollination
• Self-Pollination
• This is an intraplant behavior where the pollen of one flower
pollinates either the same flower or another flower on the same plant.
This intraplant transfer is what makes the pollination "self."
• However, this can take place in only those plants whose flowers
contain both the male as well as the female gametes.
• Cross-Pollination
• This, in contrast to self-pollination is an interplant phenomenon.
• In this case, the pollen grains are transferred from the flower of one plant
to the flower of another plant.
• Various agents such as insects, birds, animals, water, and so on aid the
process.
• The process is called abiotic when the agents are nonliving (for example,
wind); if living agents such as insects, bees, and beetles are involved,
the process becomes biotic pollination.
• Insect pollination can easily be observed for the flowers that have colored
petals and a strong odor. The insects in these circumstances are drawn
toward these flowers due to abundance of nectar, edible pollens, and so
on. When the insect sits on a flower, the pollen grains stick to their body
and when the same insect happens to sit on another flower, the pollen
grains get dropped.
• Flower Pollination Algorithm
• Xin-She Yang developed the flower pollination algorithm; this
algorithm as stated earlier is inspired by the pollination phenomenon
that takes place in flowers.
• This algorithm was also extended to solve multiobjective problems.
• To understand the algorithm in simple terms, following points need to
be kept in mind:
• • Global pollination is modeled by biotic, cross-pollination, where
the pollinators (agents) are required to obey Lévy flights.
• • Similarly, local pollination is modeled by abiotic, self-
pollination.
• Different insects have different flower constancy; this is manifested as
a reproduction probability.
• This probability is proportional to the similarity between the
two flowers involved.
• • A term, switch probability, is defined that will enable the algorithm
to make a switch from local to global pollination and vice versa.
• However, given a choice, local pollination is always preferred.
• If new solutions are better than the original ones, update them in the
population.
• Repeat step 4 for each flower F.
• Find the current best solution g.
• Repeat step 4 to 6 for T iterations.
• The global best solution at the end of T iterations is the best one as
generated by FPA
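The pollination rules above can be sketched in Python as follows. This is a minimal illustration under stated assumptions: the switch probability, population size, and Lévy exponent β = 1.5 are illustrative choices, and `rand < p` is taken here to select global pollination (some formulations invert this convention).

```python
import numpy as np
from math import gamma as gamma_fn, pi, sin

rng = np.random.default_rng(1)

def levy(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm (beta is an assumed exponent)."""
    sigma = (gamma_fn(1 + beta) * sin(pi * beta / 2) /
             (gamma_fn((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def fpa(obj, dim=2, n_flowers=25, n_iter=300, p_switch=0.8,
        lower=-5.0, upper=5.0):
    """Minimal flower pollination algorithm sketch (assumed parameters)."""
    x = rng.uniform(lower, upper, (n_flowers, dim))
    fit = np.apply_along_axis(obj, 1, x)
    g = x[fit.argmin()].copy()               # current best solution g
    for _ in range(n_iter):
        for i in range(n_flowers):
            if rng.random() < p_switch:
                # Global (biotic, cross-) pollination via a Levy flight toward g
                cand = x[i] + levy(dim) * (g - x[i])
            else:
                # Local (abiotic, self-) pollination between two random flowers
                j, k = rng.choice(n_flowers, 2, replace=False)
                cand = x[i] + rng.random() * (x[j] - x[k])
            cand = np.clip(cand, lower, upper)
            f_cand = obj(cand)
            if f_cand < fit[i]:              # keep the new solution if it is better
                x[i], fit[i] = cand, f_cand
        g = x[fit.argmin()].copy()           # find the current best solution g
    return g, fit.min()

g, val = fpa(lambda z: np.sum(z ** 2))
print(val)  # typically very close to 0 for this sphere function
```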
Multiobjective Flower
Pollination Algorithm
• The multiobjective flower pollination algorithm (MFPA) is a multiobjective
extension of the flower pollination algorithm (single-objective case).
• This algorithm was proposed by Yang with a view of solving multiobjective
problems as well.
• The obvious difference between the single-objective flower pollination
algorithm (SFPA) and MFPA is that the former is restricted to
single-objective problems, whereas the latter can solve multiobjective
optimization problems as well.
• A number of varied approaches have been proposed to transform a
multiobjective problem into a single-objective problem.
• A simple such transformation is the use of weighted sum to integrate
multiple objectives into a composite single objective.
• The composite objective is f = w1 f1 + w2 f2 + ... + wm fm, with
w1 + w2 + ... + wm = 1, where m represents the number of objectives and
the wi are nonnegative weights. The random weights wi are usually drawn
from a uniform distribution and then normalized.
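The weighted-sum transformation can be sketched as follows. The two objective functions used here are purely illustrative; both happen to equal 2 at the chosen point, so any normalized weighting yields the same composite value.

```python
import numpy as np

rng = np.random.default_rng(2)

def weighted_sum(objectives, x, weights):
    """Composite objective f(x) = sum_i w_i * f_i(x), with nonnegative w_i summing to 1."""
    return sum(w * f(x) for w, f in zip(weights, objectives))

# Two illustrative objectives (m = 2)
f1 = lambda x: float(np.sum(x ** 2))           # distance from the origin
f2 = lambda x: float(np.sum((x - 2.0) ** 2))   # distance from the point (2, 2)

# Random nonnegative weights drawn from a uniform distribution, then normalized
w = rng.random(2)
w /= w.sum()

x = np.array([1.0, 1.0])
print(weighted_sum([f1, f2], x, w))  # → 2.0 (f1(x) = f2(x) = 2, weights sum to 1)
```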
