Fireworks Algorithm for Optimization
Ying Tan and Y.C. Zhu
Peking University
1 Introduction
In recent years, Swarm Intelligence (SI) has become popular among researchers
working on optimization problems all over the world [1,2]. SI algorithms, e.g. Par-
ticle Swarm Optimization (PSO) [3], Ant System [4], Clonal Selection Algorithm
[5], and Swarm Robots [6], etc., have advantages in solving many optimization
problems. Among the SI algorithms, PSO is one of the most popular for searching for optimal locations in a D-dimensional space. In 1995, Kennedy and Eberhart proposed PSO as a powerful global optimization algorithm inspired by the behavior of bird flocks [3]. Since then, PSO has attracted the attention of researchers around the globe, and a number of variants have been continually proposed [7,8].
Like PSO, most swarm intelligence algorithms are inspired by intelligent colony behaviors in nature. In this paper, inspired by the emergent swarm behavior of fireworks, a novel swarm intelligence algorithm called the Fireworks Algorithm (FA) is proposed for function optimization. The FA is presented and implemented by simulating the explosion process of fireworks. In the FA, two explosion (search) processes are employed, and mechanisms for keeping the diversity of sparks are also carefully designed. To validate the performance of the proposed FA, comparison experiments were conducted on nine benchmark test functions among the FA, the Standard PSO (SPSO), and the Clonal PSO (CPSO) [8].

Y. Tan, Y. Shi, and K.C. Tan (Eds.): ICSI 2010, Part I, LNCS 6145, pp. 355–364, 2010.
© Springer-Verlag Berlin Heidelberg 2010
It is shown that the FA clearly outperforms the SPSO and the CPSO in both
optimization accuracy and convergence speed.
The remainder of this paper is organized as follows. Section 2 describes the
framework of the FA and introduces two types of search processes and mecha-
nisms for keeping diversity. In Section 3, experimental results are presented to
validate the performance of the FA. Section 4 concludes the paper.
2 Fireworks Algorithm
2.1 FA Framework
When a firework is set off, a shower of sparks will fill the local space around the
firework. In our opinion, the explosion process of a firework can be viewed as a search of the local space around the specific point where the firework is set off, carried out through the sparks generated in the explosion. When we are asked to find a point xj satisfying f(xj) = y, we can continually set off 'fireworks' in the potential space until one 'spark' hits, or is sufficiently close to, the point xj. Mimicking this process of setting off fireworks, a rough framework of the FA is depicted in Fig. 1.
In the FA, for each generation of explosion, we first select n locations, where
n fireworks are set off. Then after explosion, the locations of sparks are obtained
and evaluated. When the optimal location is found, the algorithm stops. Oth-
erwise, n other locations are selected from the current sparks and fireworks for
the next generation of explosion.
From Fig. 1, it can be seen that the success of the FA lies in a good design
of the explosion process and a proper method for selecting locations, which are
respectively elaborated in subsection 2.2 and subsection 2.3.
Fig. 1. Framework of the fireworks algorithm (flow chart: set off n fireworks at the selected locations; obtain and evaluate the locations of sparks; if the optimal location is found, end; otherwise select n locations for the next generation of explosion).
2.2 Design of Fireworks Explosion

Number of Sparks. The number of sparks generated by each firework xi is defined as follows.

$$ s_i = m \cdot \frac{y_{\max} - f(x_i) + \xi}{\sum_{i=1}^{n}\bigl(y_{\max} - f(x_i)\bigr) + \xi} , \qquad (2) $$
where m is a parameter controlling the total number of sparks generated by
the n fireworks, ymax = max(f(xi)) (i = 1, 2, ..., n) is the maximum (worst) value of the objective function among the n fireworks, and ξ, the smallest positive constant representable in the machine, is utilized to avoid division-by-zero errors.
To avoid the overwhelming effects of splendid fireworks, bounds are defined for si, as shown in Eq. 3.
$$ \hat{s}_i = \begin{cases} \operatorname{round}(a \cdot m) & \text{if } s_i < am \\ \operatorname{round}(b \cdot m) & \text{if } s_i > bm,\; a < b < 1 \\ \operatorname{round}(s_i) & \text{otherwise} \end{cases} \qquad (3) $$
where a and b are constant parameters.
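Eqs. 2 and 3 can be sketched in Python for a minimization problem; the fitness values below are illustrative, and the parameter defaults are taken from the experimental setting later in the paper:

```python
def spark_counts(fitness, m=50, a=0.04, b=0.8, xi=1e-12):
    """Number of sparks per firework (Eqs. 2 and 3), for minimization:
    better (smaller) fitness yields more sparks, bounded to [a*m, b*m]."""
    y_max = max(fitness)
    denom = sum(y_max - f for f in fitness) + xi
    counts = []
    for f in fitness:
        s = m * (y_max - f + xi) / denom          # Eq. 2
        if s < a * m:                             # Eq. 3: lower bound
            s_hat = round(a * m)
        elif s > b * m:                           # Eq. 3: upper bound
            s_hat = round(b * m)
        else:
            s_hat = round(s)
        counts.append(s_hat)
    return counts
```

For example, with fitness values [1.0, 2.0, 10.0], the best firework receives the most sparks and the worst is clamped to the lower bound round(a·m).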
Amplitude of Explosion. In contrast to the design of the number of sparks, the amplitude of a good firework's explosion is smaller than that of a bad one. The amplitude of explosion for each firework is defined as follows.

$$ A_i = \hat{A} \cdot \frac{f(x_i) - y_{\min} + \xi}{\sum_{i=1}^{n}\bigl(f(x_i) - y_{\min}\bigr) + \xi} , \qquad (4) $$

where Â denotes the maximum explosion amplitude and ymin = min(f(xi)) (i = 1, 2, ..., n) is the minimum (best) value of the objective function among the n fireworks.
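A matching sketch of Eq. 4; the default Â = 40 follows the experimental setting later in the paper, and the fitness values are again illustrative:

```python
def explosion_amplitudes(fitness, A_hat=40.0, xi=1e-12):
    """Explosion amplitude per firework (Eq. 4), for minimization:
    better (smaller) fitness yields a smaller, more local amplitude."""
    y_min = min(fitness)
    denom = sum(f - y_min for f in fitness) + xi
    return [A_hat * (f - y_min + xi) / denom for f in fitness]
```

Note the symmetry with Eq. 2: good fireworks get many sparks in a small amplitude (exploitation), while bad fireworks get few sparks spread over a large amplitude (exploration).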
2.3 Selection of Locations

At the end of each generation, n locations are selected from the current fireworks and sparks. In the FA, the general distance between a location xi and the other locations is defined as

$$ R(x_i) = \sum_{j \in K} d(x_i, x_j) , \qquad (6) $$

where K is the set of all current locations of both fireworks and sparks.
Then the selection probability of a location xi is defined as follows.

$$ p(x_i) = \frac{R(x_i)}{\sum_{j \in K} R(x_j)} . \qquad (7) $$
When calculating the distance, any distance measure can be utilized, including the Manhattan distance, the Euclidean distance, the angle-based distance, and so on [9]. When d(xi, xj) is defined as |f(xi) − f(xj)|, the probability is equivalent to the definition of the immune-density-based probability in Ref. [10].
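The distance-based selection probability of Eqs. 6 and 7 can be sketched as follows, here with the Euclidean distance (the example locations are assumptions for illustration, and the locations are assumed not to be all identical):

```python
def selection_probabilities(locations):
    """Distance-based selection probabilities (Eqs. 6 and 7).
    R(x_i) is the summed distance from x_i to every location in the set,
    so crowded locations get lower probability, preserving diversity."""
    def d(p, q):  # Euclidean distance; any distance measure could be used
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    R = [sum(d(x_i, x_j) for x_j in locations) for x_i in locations]
    total = sum(R)
    return [r / total for r in R]
```

An isolated location far from the crowd receives the largest probability, which is exactly the diversity-keeping behavior the selection is designed for.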
2.4 Summary
Algorithm 3 summarizes the framework of the FA. During each explosion genera-
tion, two types of sparks are generated respectively according to Algorithm 1 and
Algorithm 2. For the first type, the number of sparks and explosion amplitude
depend on the quality of the corresponding firework (f(xi)). In contrast, the second type is generated by a Gaussian explosion process, which conducts a search in a local Gaussian space around a firework. After obtaining the locations of the two types of sparks, n locations are selected for the next explosion generation. In the FA, approximately n + m + m̂ function evaluations are performed in each generation. Supposing the optimum of a function can be found in T generations, the complexity of the FA is O(T · (n + m + m̂)).
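As a rough end-to-end illustration of this framework, the following self-contained sketch combines Eqs. 2–4 with elitist selection on a sphere function. It is a sketch under simplifying assumptions, not the paper's exact procedure: the Gaussian-spark step stands in for Algorithm 2, and the random fill-in after keeping the best location simplifies the distance-based selection of Eqs. 6 and 7. Parameter defaults follow the experimental setting in Section 3.

```python
import random

def sphere(x):
    """Benchmark objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def fireworks_sketch(f, dim=2, n=5, m=50, m_hat=5, a=0.04, b=0.8,
                     A_hat=40.0, generations=50, bounds=(-100.0, 100.0),
                     seed=1, xi=1e-12):
    rng = random.Random(seed)
    lo, hi = bounds
    clip = lambda v: min(max(v, lo), hi)
    fireworks = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(generations):
        fit = [f(x) for x in fireworks]
        y_max, y_min = max(fit), min(fit)
        pool = list(fireworks)  # fireworks compete with their own sparks
        for x, y in zip(fireworks, fit):
            # Eq. 2 spark count, bounded as in Eq. 3
            s = m * (y_max - y + xi) / (sum(y_max - v for v in fit) + xi)
            s = round(min(max(s, a * m), b * m))
            # Eq. 4 amplitude: better fireworks search more locally
            A = A_hat * (y - y_min + xi) / (sum(v - y_min for v in fit) + xi)
            for _ in range(s):
                pool.append([clip(v + rng.uniform(-A, A)) for v in x])
        # m_hat Gaussian sparks for diversity (a simplified Algorithm 2)
        for _ in range(m_hat):
            x = rng.choice(fireworks)
            g = rng.gauss(1.0, 1.0)
            pool.append([clip(v * g) for v in x])
        # keep the best location, fill the rest at random (a simplification
        # of the distance-based selection of Eqs. 6 and 7)
        pool.sort(key=f)
        fireworks = [pool[0]] + rng.sample(pool[1:], n - 1)
    return min(fireworks, key=f)
```

Because the current fireworks are kept in the selection pool, the best fitness found is non-increasing from generation to generation.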
3 Experiments
3.1 Benchmark Functions
To investigate the performance of the proposed FA, we conducted experiments on nine benchmark functions. The feasible bounds for all functions are set to [−100, 100]^D. The expressions, initialization intervals and dimensionalities of the functions are listed in Table 1.
3.2 Comparison Experiments among the FA, the CPSO and the SPSO
In this section, we compare the performance of the FA with the CPSO and the
SPSO in terms of both convergence speed and optimization accuracy.
Table 2. Statistical mean and standard deviation of solutions found by the FA, the
CPSO and the SPSO on nine benchmark functions over 20 independent runs
The parameters of both the CPSO and the SPSO are set as those in Ref. [8].
For the FA, the parameters were selected through preliminary experiments. We found that the FA worked quite well with the setting n = 5, m = 50, a = 0.04, b = 0.8, Â = 40, and m̂ = 5, which is applied in all the comparison experiments.
Table 2 shows the optimization accuracy of the three algorithms on the nine benchmark functions, averaged over 20 independent runs. It can be seen that the proposed FA clearly outperforms both the CPSO and the SPSO on all the functions. In addition, the FA can find optimal solutions for most benchmark functions in fewer than 10000 function evaluations, as shown in Table 3, whereas the optimization accuracy of the CPSO and the SPSO is unacceptable within 10000 function evaluations.
Besides optimization accuracy, convergence speed is essential to an optimizer. To validate the convergence speed of the FA, we conducted more thorough experiments. Fig. 3 depicts the convergence curves of the FA, the CPSO and the SPSO on eight benchmark functions, averaged over 20 independent runs. From these results, we can conclude that the proposed FA converges much faster than the CPSO and the SPSO. From Table 3, we can see that the FA finds excellent solutions with only 10000 function evaluations, which also reflects its fast convergence speed.
Fig. 3. Convergence curves of the FA, the CPSO and the SPSO on eight benchmark functions (best fitness vs. number of function evaluations; the panels include (c) Rastrigin and (d) Griewank). The fitness values are averaged over 20 independent runs.
Table 3. Statistical mean and standard deviation of solutions found by the FA, the
CPSO and the SPSO on nine benchmark functions over 20 independent runs of 10000
function evaluations
3.3 Discussion
As shown in the experiments, the FA achieves faster convergence and better optimization accuracy than the PSO variants. We attribute the success of the FA to the following two aspects.
– In the FA, sparks undergo the force of the explosion in z dimensions simultaneously, and the z dimensions are randomly selected for each spark x̃i. Thus, there is a probability that the differences between the firework and the target location happen to lie in these z dimensions. In this scenario, the sparks of the firework can move towards the target location along z directions simultaneously, which endows the FA with a fast convergence speed.
– Two types of sparks are generated to keep the diversity of sparks, and the
specific selection process for locations is a mechanism for keeping diversity.
Therefore, the FA has the capability of avoiding premature convergence.
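The displacement described in the first point can be sketched as follows; the shared uniform offset and the exact dimension-selection rule are assumptions in the spirit of the paper's Algorithm 1, which is not reproduced in full here:

```python
import random

def displace(x, amplitude, rng=None):
    """Move a spark away from firework location x along z randomly
    chosen dimensions simultaneously, leaving the others unchanged."""
    rng = rng or random.Random()
    d = len(x)
    z = rng.randint(1, d)                   # how many dimensions are affected
    dims = rng.sample(range(d), z)          # which dimensions are affected
    h = amplitude * rng.uniform(-1.0, 1.0)  # shared displacement offset
    spark = list(x)
    for k in dims:
        spark[k] += h
    return spark
```

Since all z selected dimensions shift by the same offset, a spark can approach the target location along several coordinate directions at once, which is the mechanism behind the fast convergence claimed above.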
4 Conclusions
Mimicking the explosion process of fireworks, the fireworks algorithm (FA) has been proposed and implemented for function optimization. The experiments among the FA, the CPSO and the SPSO have shown that the proposed FA has a promising performance. It clearly outperforms the CPSO and the SPSO on nine benchmark functions in both optimization accuracy and convergence speed.
References
1. Garnier, S., Gautrais, J., Theraulaz, G.: The biological principles of swarm intelli-
gence. Swarm Intelligence 1(1), 3–31 (2007)
2. Das, S., Abraham, A., Konar, A.: Swarm intelligence algorithms in bioinformatics.
Studies in Computational Intelligence 94, 113–147 (2008)
3. Kennedy, J., Eberhart, R.C.: Particle swarm optimization. In: Proceedings of IEEE
International Conference on Neural Networks, vol. 4, pp. 1942–1948 (1995)
4. Dorigo, M., Maniezzo, V., Colorni, A.: Ant system: optimization by a colony of
cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part
B: Cybernetics 26(1), 29–41 (1996)
5. De Castro, L.N., Von Zuben, F.J.: Learning and optimization using the
clonal selection principle. IEEE Transactions on Evolutionary Computation 6(3),
239–251 (2002)
6. Beni, G., Wang, J.: Swarm intelligence in cellular robotic systems. In: Proceedings
of NATO Advanced Workshop on Robots and Biological Systems (1989)
7. Bratton, D., Kennedy, J.: Defining a standard for particle swarm optimization. In:
Proceedings of IEEE Swarm Intelligence Symposium, pp. 120–127 (2007)
8. Tan, Y., Xiao, Z.M.: Clonal particle swarm optimization and its applications.
In: Proceedings of IEEE Congress on Evolutionary Computation, pp. 2303–2309
(2007)
9. Perlibakas, V.: Distance measures for PCA-based face recognition. Pattern Recog-
nition Letters 25(6), 711–724 (2004)
10. Lu, G., Tan, D., Zhao, H.: Improvement on regulating definition of antibody density
of immune algorithm. In: Proceedings of the 9th International Conference on Neural
Information Processing, vol. 5, pp. 2669–2672 (2002)