Optimization Algorithms: Swarm Intelligence
Swarm intelligence
Swarm intelligence has attracted growing research interest across related fields in recent years. Bonabeau
defined swarm intelligence as any attempt to design algorithms or distributed problem-solving devices
inspired by the collective behavior of social insect colonies and other animal societies.
The classical example of a swarm is bees swarming around their hive; nevertheless, the metaphor can
easily be extended to other systems with a similar architecture. An ant colony can be thought of as a
swarm whose individual agents are ants. Similarly, a flock of birds is a swarm of birds.
Two fundamental concepts, self-organization and division of labor, are necessary and sufficient
properties to obtain swarm-intelligent behavior such as distributed problem-solving systems that self-organize and adapt to the given environment.
A classic illustration is the way an ant colony finds short paths to a food source: foraging ants initially
explore the available paths at random, depositing pheromone as they go.
Other ants follow one of the paths at random, also laying pheromone trails. Since the ants on
the shortest path lay pheromone trails faster, this path gets reinforced with more pheromone,
making it more appealing to future ants.
The ants thus become increasingly likely to follow the shortest path, since it is constantly reinforced
with a larger amount of pheromone, while the pheromone trails on the longer paths gradually evaporate.
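To make this reinforcement concrete, the following minimal Python sketch simulates ants choosing between a short and a long path; the deposit rule (amount proportional to 1/length), the evaporation rate, and the number of steps are illustrative assumptions, not values from the text:

import random

# Two-path pheromone simulation (illustrative parameters).
lengths = {"short": 1.0, "long": 2.0}      # path lengths
pheromone = {"short": 1.0, "long": 1.0}    # equal pheromone to start
evaporation = 0.1                          # fraction lost each step
random.seed(0)
for step in range(200):
    # An ant picks a path with probability proportional to its pheromone.
    total = sum(pheromone.values())
    path = "short" if random.uniform(0, total) < pheromone["short"] else "long"
    # Evaporate on both paths, then deposit on the chosen one; shorter paths
    # receive more deposit per unit time, modelled here as 1 / length.
    for p in pheromone:
        pheromone[p] *= (1.0 - evaporation)
    pheromone[path] += 1.0 / lengths[path]
print(pheromone)   # the "short" entry ends up with most of the pheromone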
Algorithm
The optimization paradigm is expressed as finding short paths in a graph.
Scheme (a minimal code sketch follows this outline):
Ant walk:
Each ant maintains a tabu list of infeasible transitions for that iteration
Update the attractiveness of an edge according to the number of ants that pass through it
Pheromone update
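The sketch below applies this scheme to a small travelling-salesman instance; the distance matrix, the pheromone and heuristic exponents, the evaporation rate, and the numbers of ants and iterations are all illustrative assumptions:

import random

# Illustrative ant-colony sketch on a small TSP instance (assumed data/parameters).
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
n = len(dist)
tau = [[1.0] * n for _ in range(n)]   # pheromone on each edge
alpha, beta = 1.0, 2.0                # pheromone vs. heuristic influence
rho, Q = 0.5, 1.0                     # evaporation rate and deposit constant
random.seed(0)

def build_tour():
    # One ant's walk: start somewhere, keep a tabu list of visited cities.
    start = random.randrange(n)
    tour, tabu = [start], {start}
    while len(tour) < n:
        i = tour[-1]
        choices = [j for j in range(n) if j not in tabu]
        weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta) for j in choices]
        j = random.choices(choices, weights=weights)[0]
        tour.append(j)
        tabu.add(j)
    return tour

def tour_length(tour):
    return sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))

best = None
for _ in range(50):                              # iterations
    tours = [build_tour() for _ in range(10)]    # 10 ants per iteration
    # Pheromone update: evaporate everywhere, then deposit along each tour.
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for t in tours:
        length = tour_length(t)
        for k in range(n):
            a, b = t[k], t[(k + 1) % n]
            tau[a][b] += Q / length
            tau[b][a] += Q / length
        if best is None or length < tour_length(best):
            best = t

print(best, tour_length(best))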
Particle Swarm Optimization (PSO)
Each particle updates its velocity and position as
v[] = v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])    (a)
present[] = present[] + v[]    (b)
pbest[] is the best position a particle has found so far, and gbest[] is the best position found by any
particle in the swarm. rand() is a random number in (0, 1). c1 and c2 are learning factors; usually c1 = c2 = 2.
The pseudo code of the procedure is as follows:
For each particle
Initialize particle
END
Do
For each particle
Calculate fitness value
If the fitness value is better than the best fitness value (pBest) in history
set current value as the new pBest
End
Choose the particle with the best fitness value of all the particles as the gBest
For each particle
Calculate particle velocity according to equation (a)
Update particle position according to equation (b)
End
While maximum iterations or minimum error criterion is not attained
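A minimal runnable Python sketch of the procedure above, using an assumed objective (the sphere function) and illustrative values for the swarm size, search bounds, and iteration count:

import random

# Illustrative PSO sketch (assumed objective and parameters).
def fitness(x):
    return sum(xi * xi for xi in x)       # sphere function; lower is better

dim, n_particles, iters = 2, 20, 100
c1 = c2 = 2.0
random.seed(0)

pos = [[random.uniform(-10, 10) for _ in range(dim)] for _ in range(n_particles)]
vel = [[0.0] * dim for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=fitness)

for _ in range(iters):
    for i in range(n_particles):
        # pBest bookkeeping
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = pos[i][:]
    # gBest is the best pBest in the swarm
    gbest = min(pbest, key=fitness)
    for i in range(n_particles):
        for d in range(dim):
            # velocity update, equation (a)
            vel[i][d] += (c1 * random.random() * (pbest[i][d] - pos[i][d])
                          + c2 * random.random() * (gbest[d] - pos[i][d]))
            # position update, equation (b)
            pos[i][d] += vel[i][d]

print(gbest, fitness(gbest))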
Bat Algorithm
The Bat Algorithm is based on the echolocation behavior of bats. The echolocation capability of
microbats is fascinating, as these bats can find their prey and discriminate different types of insects even
in complete darkness.
Most microbats are insectivores. Microbats use a type of sonar, called echolocation, to detect prey,
avoid obstacles, and locate their roosting crevices in the dark. These bats emit a very loud sound pulse
and listen for the echo that bounces back from the surrounding objects. Each ultrasonic burst typically
lasts 5 to 20 ms, and microbats emit about 10 to 20 such sound bursts every second. When hunting
for prey, the rate of pulse emission can be sped up to about 200 pulses per second as they fly near
their prey. Such short sound bursts imply remarkably powerful signal processing in bats.
As the speed of sound in air is typically v = 340 m/s, the wavelength λ of the ultrasonic sound bursts with
a constant frequency f is given by
λ = v/f,
(1)
which is in the range of 2 mm to 14 mm for the typical frequency range of 25 kHz to 150 kHz. Such
wavelengths are of the same order as their prey sizes.
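A quick check of equation (1) at the two ends of the stated frequency range:

v = 340.0                      # speed of sound in air, m/s
for f in (25e3, 150e3):        # 25 kHz and 150 kHz
    print(f"{f / 1e3:.0f} kHz -> {v / f * 1000:.1f} mm")
# 25 kHz -> 13.6 mm, 150 kHz -> 2.3 mm, i.e. roughly 2 mm to 14 mm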
Amazingly, the emitted pulse can be as loud as 110 dB and, fortunately, these pulses lie in the ultrasonic
region. The loudness also varies, from loudest when searching for prey to a quieter base level when
homing in on the prey. The travelling range of such short pulses is typically a few metres,
depending on the actual frequencies.
This echolocation behavior of microbats can be formulated in such a way that it is associated with the
objective function to be optimized, which makes it possible to formulate new optimization
algorithms.
If we idealize some of the echolocation characteristics of microbats, we can develop various bat-inspired
algorithms or bat algorithms. For simplicity, we now use the following approximate or idealized rules:
1. All bats use echolocation to sense distance, and they also know the difference between food/prey
and background barriers in some magical way;
2. Bats fly randomly with velocity vi at position xi with a fixed frequency fmin, varying wavelength λ and
loudness A0 to search for prey. They can automatically adjust the wavelength (or frequency) of their
emitted pulses and adjust the rate of pulse emission r ∈ [0, 1], depending on the proximity of their
target;
3. Although the loudness can vary in many ways, we assume that the loudness varies from a large
(positive) A0 to a minimum constant value Amin.
Pseudo code of the bat algorithm (BA):
Objective function f(x), x = (x1, ..., xd)^T
Initialize the bat population xi (i = 1, 2, ..., n) and vi
Furthermore, the loudness Ai and the rate ri of pulse emission have to be updated accordingly as the
iterations proceed. As the loudness usually decreases once a bat has found its prey, while the rate of
pulse emission increases, the loudness can be chosen as any value of convenience. For example, we can
use A0 = 100 and Amin = 1. For simplicity, we can also use A0 = 1 and Amin = 0, where Amin = 0
means that a bat has just found the prey and temporarily stops emitting any sound. Now we have
A_i^{t+1} = α A_i^t,
r_i^{t+1} = r_i^0 [1 − exp(−γt)],
(6)
where α and γ are constants. In fact, α is similar to the cooling factor of a cooling schedule in
simulated annealing [9]. For any 0 < α < 1 and γ > 0, we have
A_i^t → 0,
r_i^t → r_i^0, as t → ∞.
(7)
In the simplest case, we can use α = γ, and we have used α = γ = 0.9 in our simulations. The choice of
parameters requires some experimenting. Initially, each bat should have different values of loudness
and pulse emission rate, and this can be achieved by randomization. For example, the initial loudness
A_i^0 ∈ [0, 1] if using (6). Their loudness and emission rates will be updated only if the new solutions are
improved, which means that these bats are moving towards the optimal solution.
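The following Python sketch puts these pieces together. The loudness and pulse-rate updates follow equation (6) with α = γ = 0.9 as in the text; the frequency, velocity, and position updates are not reproduced in this excerpt, so the sketch assumes the standard bat-algorithm formulation (random frequency in [fmin, fmax], velocity drawn toward the current best, and a loudness-scaled local random walk), and the objective, bounds, and swarm size are illustrative assumptions:

import math
import random

# Illustrative bat-algorithm sketch (assumed objective, bounds and swarm size).
def f(x):
    return sum(xi * xi for xi in x)       # sphere function; lower is better

dim, n_bats, iters = 2, 15, 200
fmin, fmax = 0.0, 2.0                     # frequency range (assumed)
alpha = gamma = 0.9                       # as in the text
random.seed(0)

x = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_bats)]
v = [[0.0] * dim for _ in range(n_bats)]
A = [random.uniform(0.5, 1.0) for _ in range(n_bats)]    # initial loudness A_i^0
r0 = [random.uniform(0.0, 1.0) for _ in range(n_bats)]   # initial pulse rate r_i^0
r = r0[:]
best = min(x, key=f)

for t in range(1, iters + 1):
    for i in range(n_bats):
        # Standard (assumed) frequency/velocity/position updates.
        freq = fmin + (fmax - fmin) * random.random()
        for d in range(dim):
            v[i][d] += (x[i][d] - best[d]) * freq
        cand = [x[i][d] + v[i][d] for d in range(dim)]
        if random.random() > r[i]:
            # Local random walk around the best solution, scaled by mean loudness.
            avg_A = sum(A) / n_bats
            cand = [best[d] + random.uniform(-1, 1) * avg_A for d in range(dim)]
        # Accept improved solutions with probability tied to loudness.
        if f(cand) < f(x[i]) and random.random() < A[i]:
            x[i] = cand
            A[i] *= alpha                                  # loudness decays, eq. (6)
            r[i] = r0[i] * (1 - math.exp(-gamma * t))      # pulse rate rises, eq. (6)
        if f(x[i]) < f(best):
            best = x[i][:]

print(best, f(best))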
Glowworm Swarm Optimization (GSO)
Local-decision range update:
r_d^i(t+1) = min{ r_s, max{ 0, r_d^i(t) + β ( n_t − |N_i(t)| ) } }
where r_d^i(t+1) is glowworm i's local-decision range at the t+1 iteration, r_s is the sensor range, n_t is the
neighborhood threshold, and the parameter β affects the rate of change of the neighborhood range.
The set of neighbors in the local-decision range:
N_i(t) = { j : ‖x_j(t) − x_i(t)‖ < r_d^i(t) and l_i(t) < l_j(t) }
(2)
where x_j(t) is glowworm j's position at the t iteration and l_j(t) is glowworm j's luciferin at the t
iteration; the set of neighbors of glowworm i consists of those glowworms that have a relatively higher
luciferin value and that are located within a dynamic decision domain whose range r_d^i is bounded above
by a circular sensor range r_s (0 < r_d^i < r_s). Each glowworm i selects a neighbour j with a probability p_ij(t)
and moves toward it. These movements, which are based only on local information, enable the
glowworms to partition into disjoint subgroups, exhibit a simultaneous taxis-behaviour toward, and
eventually co-locate at, the multiple optima of the given objective function.
The probability of glowworm i moving toward neighbour j:
p_ij(t) = ( l_j(t) − l_i(t) ) / Σ_{k ∈ N_i(t)} ( l_k(t) − l_i(t) )
(3)
Movement update (s is the step size):
x_i(t+1) = x_i(t) + s ( x_j(t) − x_i(t) ) / ‖ x_j(t) − x_i(t) ‖
(4)
Luciferin update:
l_i(t) = (1 − ρ) l_i(t−1) + γ J(x_i(t))
(5)
where l_i(t) is the luciferin value of glowworm i at the t iteration, ρ ∈ (0, 1) leads to the reflection of the
cumulative goodness of the path followed by the glowworms in their current luciferin values, the
parameter γ only scales the function fitness values, and J(x_i(t)) is the value of the test function.
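A minimal Python sketch of the GSO iteration described by equations (2) to (5); the test function J, the swarm size, the step size s, and the values of ρ, γ, β, n_t, and r_s are illustrative assumptions:

import math
import random

# Illustrative glowworm-swarm sketch (assumed test function and parameters).
def J(x):
    return -(x[0] ** 2 + x[1] ** 2)       # maximize; peak at the origin

n, dim, iters = 30, 2, 100
rho, gamma = 0.4, 0.6                     # luciferin decay / enhancement
beta, n_t, r_s, s = 0.08, 5, 3.0, 0.03    # range update, threshold, sensor range, step
random.seed(0)

pos = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(n)]
luc = [5.0] * n                           # initial luciferin
r_d = [r_s] * n                           # initial local-decision range

def dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

for _ in range(iters):
    # Luciferin update, eq. (5)
    luc = [(1 - rho) * luc[i] + gamma * J(pos[i]) for i in range(n)]
    new_pos = [p[:] for p in pos]
    for i in range(n):
        # Neighbors: within the decision range and brighter, eq. (2)
        nbrs = [j for j in range(n)
                if j != i and 0 < dist(pos[i], pos[j]) < r_d[i] and luc[j] > luc[i]]
        if nbrs:
            # Probabilistic choice of a brighter neighbor, eq. (3)
            j = random.choices(nbrs, weights=[luc[k] - luc[i] for k in nbrs])[0]
            d = dist(pos[i], pos[j])
            # Movement update, eq. (4)
            new_pos[i] = [pos[i][k] + s * (pos[j][k] - pos[i][k]) / d
                          for k in range(dim)]
        # Local-decision range update
        r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))
    pos = new_pos

print(max(pos, key=J))   # glowworms cluster near the optimum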