Multi-Objective Antlion Optimizer
DOI 10.1007/s10489-016-0825-8
Seyedali Mirjalili
1 Introduction
In recent years, computers have become very popular in different fields for solving challenging problems. Computer-aided design is a field that emphasizes the use of computers in solving problems and designing systems. In the past, the design process of a system required direct human involvement. For instance, if a designer wanted to find an optimal shape for a rocket, he would have to first create a prototype and then test it in a wind tunnel. Obviously, such a design approach was very expensive and time-consuming. The more complex the system was, the more time and cost the entire project required.
The invention of computers sped up the design process significantly a couple of decades ago. People are now able to use computers to design a system without the need for even a single prototype. As a result, not only the cost but also the time of the design process is substantially less than before. Despite the fact that the machine is now a great assistance, designing a system this way still requires direct human involvement. This results in a process of trial and error in which the designer tries to design an efficient system. It is undeniable that a designer is prone to mistakes, which makes the design process unreliable.
Another revolutionary approach was the use of machines to not only simulate a system but also to design it. In this case, a designer mostly sets up the system and utilizes a computer
S. Mirjalili et al.
2 Literature review
In single-objective optimization, there is only one global optimum. This is because of the unary objective in single-objective problems and the existence of one best solution. Comparing solutions is easy when considering one objective and is done with the relational operators >, ≥, <, ≤, and =. The nature of such problems allows algorithms to conveniently compare the candidate solutions and eventually find the best one. In multi-objective problems, however, solutions should be compared with respect to more than one objective (criterion). Multi-objective optimization can be formulated as a minimization problem as follows:
Minimize:

F(x) = {f1(x), f2(x), ..., fo(x)}   (2.1)

Subject to:

gi(x) ≥ 0,  i = 1, 2, ..., m   (2.2)

hi(x) = 0,  i = 1, 2, ..., p   (2.3)

Li ≤ xi ≤ Ui,  i = 1, 2, ..., n   (2.4)
where n is the number of variables, o is the number of objective functions, m is the number of inequality constraints, p is the number of equality constraints, gi is the i-th inequality constraint, hi indicates the i-th equality constraint, and [Li, Ui] are the boundaries of the i-th variable.
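As a concrete illustration of Eqs. (2.1)-(2.4), the sketch below evaluates the objective vector and checks feasibility for a small bi-objective problem; the functions, constraint, and bounds are our own toy examples, not from the paper:

```python
# Toy instance of the general formulation in Eqs. (2.1)-(2.4).
# f1, f2, g1, and the bounds below are illustrative assumptions.

def F(x):
    # Objective vector {f1(x), f2(x)} -- Eq. (2.1)
    return [x[0] ** 2 + x[1] ** 2, (x[0] - 1) ** 2 + x[1] ** 2]

def feasible(x):
    g1 = x[0] + x[1] - 0.5                         # g1(x) >= 0 -- Eq. (2.2)
    in_bounds = all(0.0 <= xi <= 1.0 for xi in x)  # Li <= xi <= Ui -- Eq. (2.4)
    return g1 >= 0 and in_bounds

vals = F([0.4, 0.3])       # approximately [0.25, 0.45]
ok = feasible([0.4, 0.3])  # True
```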
Obviously, relational operators are no longer sufficient for comparing solutions of a problem with multiple objectives, and other comparison operators are needed. Without loss of generality, the four main definitions in multi-objective optimization (for minimization) are as follows:
Definition 1 (Pareto Dominance) Assume two vectors x = (x1, x2, ..., xk) and y = (y1, y2, ..., yk). Vector x is said to dominate vector y (denoted as x ≺ y) if and only if:

∀i ∈ {1, 2, ..., k} : fi(x) ≤ fi(y)  ∧  ∃i ∈ {1, 2, ..., k} : fi(x) < fi(y)   (2.5)

Definition 2 (Pareto Optimality) A solution x ∈ X is called Pareto optimal if and only if:

∄ y ∈ X | F(y) ≺ F(x)   (2.6)

Definition 3 (Pareto Optimal Set) The set of all Pareto optimal solutions:

Ps := {x ∈ X | ∄ y ∈ X, F(y) ≺ F(x)}   (2.7)

Definition 4 (Pareto Optimal Front) The set of objective values of the solutions in the Pareto optimal set:

Pf := {F(x) | x ∈ Ps}   (2.8)
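Definition 1 maps directly to code; the following is a minimal sketch for minimization (the helper name `dominates` is ours):

```python
# Pareto dominance (Definition 1) for minimization: x dominates y iff
# it is no worse in every objective and strictly better in at least one.

def dominates(fx, fy):
    no_worse = all(a <= b for a, b in zip(fx, fy))
    strictly_better = any(a < b for a, b in zip(fx, fy))
    return no_worse and strictly_better

a = dominates([1.0, 2.0], [1.5, 2.0])  # True: better in f1, equal in f2
b = dominates([1.0, 3.0], [1.5, 2.0])  # False: mutually non-dominated
```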
The general frameworks of all population-based multiobjective algorithms are almost identical. They start the
optimization process with multiple candidate solutions.
Such solutions are compared using the Pareto dominance
operator. In each step of optimization, the non-dominated
solutions are stored in a repository and the algorithm tries to
improve them in the next iteration(s). What makes one algorithm different from another is the use of different methods to enhance the non-dominated solutions.
Improving the non-dominated solutions using stochastic algorithms should be done in terms of two perspectives:
convergence (accuracy) and coverage (distribution) [32].
The former refers to the process of improving the accuracy of the non-dominated solutions. The ultimate goal is
to find approximations very close to the true Pareto optimal solutions. In the latter case, an algorithm should try
to improve the distribution of the non-dominated solutions
to cover the entire true Pareto optimal front. This is a
very important factor in a posteriori approaches, in which
a wide range of solutions should be found for decision
making.
The main challenge in multi-objective optimization using
stochastic algorithms is that the convergence and coverage are in conflict. If an algorithm only concentrates on
improving the accuracy of non-dominated solutions, the
coverage will be poor. By contrast, a mere consideration
of the coverage negatively impacts the accuracy of the
non-dominated solutions. Most of the current algorithms periodically balance convergence and coverage to find very accurate approximations of the Pareto optimal solutions with a uniform distribution along all objectives.
For convergence, the main mechanism of the single-objective version of an algorithm is normally sufficient. For instance, in Particle Swarm Optimization (PSO) [33, 34], the solutions tend towards the global best. If the global best is replaced with a non-dominated solution, the particles are able to improve its accuracy just as they do in a single-objective search space. For improving coverage, however, the search should be guided towards different solutions. For instance, the gbest in PSO can be replaced with a random non-dominated solution so that particles improve different regions of the obtained Pareto optimal front. The main challenge here is the selection of non-dominated solutions in a way that guarantees improving the distribution of Pareto optimal solutions.
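The gbest-replacement idea above can be sketched as follows: filter the non-dominated objective vectors out of the current population and draw a random leader from them. The function names and sample data are illustrative, not the paper's exact procedure:

```python
import random

def dominates(fx, fy):
    # Pareto dominance for minimization
    return all(a <= b for a, b in zip(fx, fy)) and \
           any(a < b for a, b in zip(fx, fy))

def non_dominated(objs):
    # Keep every objective vector that no other vector dominates
    return [f for f in objs
            if not any(dominates(g, f) for g in objs if g is not f)]

objs = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]]
archive = non_dominated(objs)    # [3.0, 3.0] is dominated by [2.0, 2.0]
leader = random.choice(archive)  # random non-dominated guide (cf. gbest)
```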
There are different approaches in the literature for improving the coverage of an algorithm. The archive and leader selection in MOPSO, the non-dominated sorting mechanism in NSGA, and niching [35-37] are the most popular approaches. In the next section, the multi-objective version of the recently proposed ALO is proposed as an alternative approach for finding Pareto optimal solutions of multi-objective problems.
where cumsum calculates the cumulative sum, n is the maximum number of iterations, t shows the step of random walk (iteration in this study), and

r(t) = 1 if rand > 0.5;  r(t) = 0 if rand ≤ 0.5

is a stochastic function, where rand is a random number generated with uniform distribution in the interval [0, 1].
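This construction can be sketched in code as a cumulative sum of ±1 steps, following the ALO random-walk definition (the seed parameter is our own convenience):

```python
import random

def random_walk(n_iter, seed=None):
    # X(t) accumulates steps of 2*r(t) - 1, where r(t) = 1 if rand > 0.5
    # and 0 otherwise, so every step is either +1 or -1.
    rng = random.Random(seed)
    walk = [0.0]
    for _ in range(n_iter):
        r = 1.0 if rng.random() > 0.5 else 0.0
        walk.append(walk[-1] + (2.0 * r - 1.0))
    return walk

walk = random_walk(1000, seed=42)  # one walk across 1000 iterations
```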
In order to keep the random walk in the boundaries of
the search space and prevent the ants from overshooting,
the random walks should be normalized using the following
equation:
Xi^t = ((Xi^t − ai) × (di^t − ci^t)) / (bi − ai) + ci^t   (3.2)

where ci^t is the minimum of the i-th variable at the t-th iteration, di^t indicates the maximum of the i-th variable at the t-th iteration, ai is the minimum of the random walk of the i-th variable, and bi is the maximum of the random walk of the i-th variable.
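Eq. (3.2) is a min-max normalization that maps a walk from its own range [ai, bi] into the current variable range [ci^t, di^t]; a minimal sketch with assumed names:

```python
def normalize_walk(walk, c, d):
    # Eq. (3.2): map the walk from its own range [a, b] = [min, max]
    # into the current variable range [c, d].
    a, b = min(walk), max(walk)
    return [(x - a) * (d - c) / (b - a) + c for x in walk]

scaled = normalize_walk([0.0, 1.0, 2.0, 1.0, 3.0], c=-5.0, d=5.0)
# endpoints map exactly: min(walk) -> c, max(walk) -> d
```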
ALO simulates the entrapment of ants in antlions' pits by changing the random walks around antlions. The following equations have been proposed in this regard:

ci^t = Antlion_j^t + c^t   (3.3)

di^t = Antlion_j^t + d^t   (3.4)

c^t = c^t / I   (3.5)

d^t = d^t / I   (3.6)

Antlion_j^t = Ant_i^t   if f(Ant_i^t) < f(Antlion_j^t)   (3.7)

Ant_i^t = (R_A^t + R_E^t) / 2   (3.8)

P_i = c / N_i   (3.9)

P_i = N_i / c   (3.10)

Antlion_j^t = Antlion_j^t   if f(Ant_i^t) ≥ f(Antlion_j^t)   (3.11)

where c^t and d^t are the minimum and maximum of all variables at the t-th iteration, I is a ratio that shrinks the boundaries over the course of iterations, R_A^t is the random walk around the antlion selected by the roulette wheel at the t-th iteration, R_E^t is the random walk around the elite at the t-th iteration, and N_i is the number of solutions in the vicinity of the i-th solution.
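The antlion selection of Eq. (3.9) can be sketched as a roulette wheel whose weights are proportional to c/N_i, so sparsely populated archive regions are picked more often; the niche counts and the constant c = 1 below are assumptions for illustration:

```python
import random

def roulette_select(niche_counts, rng=None):
    # Selection probability of archive member i is proportional to
    # c / N_i (Eq. 3.9), favouring less crowded regions.
    rng = rng or random.Random()
    c = 1.0
    weights = [c / n for n in niche_counts]
    r = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# Member 0 sits in the sparsest niche (N = 1), so it is picked most often.
picks = [roulette_select([1, 5, 10], random.Random(s)) for s in range(1000)]
```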
Table 1 Results of the multi-objective algorithms (using IGD) on the unconstrained test functions employed

ZDT1
Algorithm   Ave       Std.       Median   Best     Worst
MOALO       0.01524   0.005022   0.0166   0.0061   0.0209
MOPSO       0.00422   0.003103   0.0037   0.0015   0.0101
NSGA-II     0.05988   0.005436   0.0574   0.0546   0.0702

ZDT2
Algorithm   Ave       Std.       Median   Best     Worst
MOALO       0.01751   0.010977   0.0165   0.0050   0.0377
MOPSO       0.00156   0.000174   0.0017   0.0013   0.0017
NSGA-II     0.13972   0.026263   0.1258   0.1148   0.1834

ZDT3
Algorithm   Ave       Std.       Median   Best     Worst
MOALO       0.03032   0.000969   0.0323   0.0303   0.0330
MOPSO       0.03782   0.006297   0.0362   0.0308   0.0497
NSGA-II     0.04166   0.008073   0.0403   0.0315   0.0557

ZDT1 with linear front
Algorithm   Median   Best     Worst
MOALO       0.0196   0.0106   0.0330
MOPSO       0.0098   0.0012   0.0165
NSGA-II     0.0804   0.0773   0.0924

ZDT2 with three objectives
Algorithm   Median   Best     Worst
MOALO       0.0288   0.0191   0.0315
MOPSO       0.0203   0.0189   0.0225
NSGA-II     0.0584   0.0371   0.0847
original ZDT suite, but the last two test functions are slightly different, in a manner similar to [43]. We have deliberately modified ZDT1 and ZDT2 to create a linear and a 3D front for benchmarking the performance of the proposed MOALO algorithm. The results are presented in Table 1 and Figs. 2 and 3.
Table 1 shows that the MOALO algorithm managed
to outperform the NSGA-II algorithm significantly on
all unconstrained test functions. The superiority can be
seen in all the columns, showing a higher accuracy
and better robustness of MOALO compared to NSGA-II.
The MOALO algorithm, however, shows very competitive results in comparison with the MOPSO algorithm and
occasionally outperforms it.
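For reference, the IGD indicator used in Table 1 can be computed as the average distance from each true Pareto-optimal point to its nearest obtained solution (one common definition; lower is better); a minimal sketch:

```python
import math

def igd(true_front, obtained):
    # Inverted Generational Distance: mean Euclidean distance from each
    # true Pareto-front point to the closest obtained point.
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sum(min(dist(t, o) for o in obtained)
               for t in true_front) / len(true_front)

true_front = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
perfect = igd(true_front, true_front)  # 0.0 for a perfect approximation
```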
Fig. 2 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT1, ZDT2, and ZDT3
Fig. 3 Best Pareto optimal front obtained by the multi-objective algorithms on ZDT1 with linear front and ZDT2 with 3 objectives
Table 2 Results of the multi-objective algorithms (Ave and Std. of GD, metric of spread, metric of spacing, and IGD) on the CONSTR, TNK, SRN, BNH, and OSY constrained test problems
Fig. 4 Best Pareto optimal front obtained by MOALO for CONSTR, TNK, SRN, BNH, and OSY
Table 3 Results of the multi-objective algorithms (Ave and Std. of GD, metric of spread, metric of spacing, and IGD) on the engineering design problems
Fig. 5 Best Pareto optimal front obtained by the multi-objective algorithms on the engineering design multi-objective problems: 4-bar truss design, speed reducer design, disk brake design, welded beam design,
5 Conclusion
This paper proposed MOALO, the multi-objective version of the recently proposed ALO algorithm. While maintaining the main search mechanism of ALO, MOALO was designed by equipping ALO with an archive and an antlion selection mechanism based on Pareto dominance. The algorithm was tested on 17 case studies including 5 unconstrained functions, 5 constrained functions, and 7 engineering design problems. The quantitative results were collected using four performance indicators: GD, IGD, metric of spread, and metric of spacing. Also, qualitative results were reported as the best Pareto optimal front found in 10 runs. For verification, the proposed algorithm was compared to the well-regarded algorithms in the field: NSGA-II and MOPSO. The results showed that MOALO is able to outperform NSGA-II on the majority of the test functions and to provide very competitive results compared to the MOPSO algorithm. It was observed that MOALO benefits from high convergence and coverage as well. The test functions employed are of different types and have diverse Pareto optimal fronts, and the results showed that MOALO can find Pareto optimal fronts of a wide variety of shapes. Finally, the results on the constrained engineering design problems showed that MOALO is capable of solving challenging problems with many constraints and unknown search spaces. Therefore, we conclude that the proposed algorithm has merit among the current multi-objective algorithms and offer it as an alternative for solving multi-objective optimization problems. In light of the NFL theorem, no algorithm can outperform all others on all problems; since MOALO outperformed the comparison algorithms on the test functions considered, however, it has the potential to provide superior results on similar problems as well. Note that the source codes of the MOALO and ALO algorithms are publicly available at http://alimirjalili.com/ALO.html.
For future work, it is recommended to apply MOALO to other engineering design problems. Also, it is worth to
ZDT1:

Minimize: f1(x) = x1   (A.1)

Minimize: f2(x) = g(x) × h(f1(x), g(x))   (A.2)

Where:

g(x) = 1 + (9 / (N − 1)) Σ_{i=2}^{N} xi   (A.3)

h(f1(x), g(x)) = 1 − sqrt(f1(x) / g(x))   (A.4)

0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT2:

Minimize: f1(x) = x1   (A.5)

Minimize: f2(x) = g(x) × h(f1(x), g(x))   (A.6)

Where:

g(x) = 1 + (9 / (N − 1)) Σ_{i=2}^{N} xi   (A.7)

h(f1(x), g(x)) = 1 − (f1(x) / g(x))²   (A.8)

0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT3:

Minimize: f1(x) = x1   (A.9)

Minimize: f2(x) = g(x) × h(f1(x), g(x))   (A.10)

Where:

g(x) = 1 + (9 / 29) Σ_{i=2}^{N} xi   (A.11)

h(f1(x), g(x)) = 1 − sqrt(f1(x) / g(x)) − (f1(x) / g(x)) sin(10π f1(x))   (A.12)

0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT1 with linear front:

Minimize: f1(x) = x1   (A.13)

Minimize: f2(x) = g(x) × h(f1(x), g(x))   (A.14)

Where:

g(x) = 1 + (9 / (N − 1)) Σ_{i=2}^{N} xi   (A.15)

h(f1(x), g(x)) = 1 − f1(x) / g(x)   (A.16)

0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT2 with three objectives:

Minimize: f1(x) = x1   (A.17)

Minimize: f2(x) = x2   (A.18)

Minimize: f3(x) = g(x) × h(f1(x), f2(x), g(x))   (A.19)

Where:

g(x) = 1 + (9 / (N − 1)) Σ_{i=2}^{N} xi   (A.20)

0 ≤ xi ≤ 1, 1 ≤ i ≤ 30   (A.21)

CONSTR:

This problem has a convex Pareto front, and there are two constraints and two design variables.

Minimize: f1(x) = x1   (B.1)

Minimize: f2(x) = (1 + x2) / x1   (B.2)

Where:

g1(x) = x2 + 9x1 ≥ 6

g2(x) = −x2 + 9x1 ≥ 1

0.1 ≤ x1 ≤ 1, 0 ≤ x2 ≤ 5

TNK:

The second problem has a discontinuous Pareto optimal front, and there are two constraints and two design variables.

Minimize: f1(x) = x1   (B.3)

Minimize: f2(x) = x2   (B.4)

Where:

g1(x) = −x1² − x2² + 1 + 0.1 cos(16 arctan(x1 / x2)) ≤ 0

g2(x) = 0.5 − (x1 − 0.5)² − (x2 − 0.5)² ≥ 0

0 ≤ x1 ≤ π, 0 ≤ x2 ≤ π

SRN:

The third problem has a continuous Pareto optimal front and was proposed by Srinivas and Deb [46].

Minimize: f1(x) = 2 + (x1 − 2)² + (x2 − 1)²   (B.5)

Minimize: f2(x) = 9x1 − (x2 − 1)²   (B.6)

Where:

g1(x) = x1² + x2² ≤ 225

g2(x) = x1 − 3x2 + 10 ≤ 0

−20 ≤ x1 ≤ 20, −20 ≤ x2 ≤ 20

BNH:

This problem was first proposed by Binh and Korn [47]:

Minimize: f1(x) = 4x1² + 4x2²   (B.7)

Minimize: f2(x) = (x1 − 5)² + (x2 − 5)²   (B.8)

Where:

g1(x) = (x1 − 5)² + x2² ≤ 25

g2(x) = (x1 − 8)² + (x2 + 3)² ≥ 7.7

0 ≤ x1 ≤ 5, 0 ≤ x2 ≤ 3

OSY:

This problem was proposed by Osyczka and Kundu [48]:

Minimize: f1(x) = −(25(x1 − 2)² + (x2 − 2)² + (x3 − 1)² + (x4 − 4)² + (x5 − 1)²)   (B.9)

Minimize: f2(x) = x1² + x2² + x3² + x4² + x5² + x6²   (B.10)

Where:

g1(x) = 2 − x1 − x2 ≤ 0

g2(x) = −6 + x1 + x2 ≤ 0

g3(x) = −2 − x1 + x2 ≤ 0

g4(x) = −2 + x1 − 3x2 ≤ 0

g5(x) = −4 + x4 + (x3 − 3)² ≤ 0

g6(x) = 4 − x6 − (x5 − 3)² ≤ 0

0 ≤ x1 ≤ 10, 0 ≤ x2 ≤ 10, 1 ≤ x3 ≤ 5, 0 ≤ x4 ≤ 6, 1 ≤ x5 ≤ 5, 0 ≤ x6 ≤ 10

Eqs. (C.1)-(C.14) formulate the engineering design problems employed: the 4-bar truss, speed reducer, welded beam, and disk brake designs.
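As a worked example of the appendix formulations, ZDT1 (Eqs. A.1-A.4) can be evaluated as follows; a minimal sketch (on the true Pareto front the variables x2 through xN are zero, so g = 1 and f2 = 1 - sqrt(f1)):

```python
import math

def zdt1(x):
    # ZDT1 (Eqs. A.1-A.4) with N = len(x) variables, 0 <= xi <= 1.
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    h = 1.0 - math.sqrt(f1 / g)
    return [f1, g * h]

front_point = zdt1([0.25] + [0.0] * 29)  # [0.25, 0.5]: on the true front
```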
References
1. Kelley CT (1999) Detection and remediation of stagnation in the Nelder-Mead algorithm using a sufficient decrease condition. SIAM J Optim 10:43–55
2. Vogl TP, Mangis J, Rigler A, Zink W, Alkon D (1988) Accelerating the convergence of the back-propagation method. Biol Cybern 59:257–263
3. Marler RT, Arora JS (2004) Survey of multi-objective optimization methods for engineering. Struct Multidiscip Optim 26:369–395
4. Coello CAC (2002) Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Methods Appl Mech Eng 191:1245–1287
5. Beyer H-G, Sendhoff B (2007) Robust optimization: a comprehensive survey. Comput Methods Appl Mech Eng 196:3190–3218
6. Knowles JD, Watson RA, Corne DW (2001) Reducing local optima in single-objective problems by multi-objectivization. In: Evolutionary multi-criterion optimization, pp 269–283
7. Deb K, Goldberg DE (1993) Analyzing deception in trap functions. Found Genet Algorithms 2:93–108
8. Deb K (2001) Multi-objective optimization using evolutionary algorithms, vol 16. Wiley
9. Coello CAC, Lamont GB, Van Veldhuizen DA (2007) Evolutionary algorithms for solving multi-objective problems. Springer
10. Branke J, Deb K, Dierolf H, Osswald M (2004) Finding knees in multi-objective optimization. In: Parallel problem solving from nature - PPSN VIII, pp 722–731
11. Das I, Dennis JE (1998) Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM J Optim 8:631–657
12. Kim IY, De Weck O (2005) Adaptive weighted-sum method for bi-objective optimization: Pareto front generation. Struct Multidiscip Optim 29:149–158
13. Messac A, Mattson CA (2002) Generating well-distributed sets of Pareto points for engineering design using physical programming. Optim Eng 3:431–450
14. Deb K, Agrawal S, Pratap A, Meyarivan T (2000) A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II. In: Parallel problem solving from nature - PPSN VI, pp 849–858
15. Deb K, Goel T (2001) Controlled elitist non-dominated sorting genetic algorithms for better convergence. In: Evolutionary multi-criterion optimization, pp 67–81
16. Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6:182–197
17. Coello CAC, Lechuga MS (2002) MOPSO: a proposal for multiple objective particle swarm optimization. In: Proceedings of the 2002 congress on evolutionary computation, CEC'02, pp 1051–1056
18. Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8:256–279
19. Akbari R, Hedayatzadeh R, Ziarati K, Hassanizadeh B (2012) A multi-objective artificial bee colony algorithm. Swarm Evol Comput 2:39–52
20. Yang X-S (2011) Bat algorithm for multi-objective optimisation. Int J Bio-Inspired Comput 3:267–274
21. Mirjalili S, Saremi S, Mirjalili SM, Coelho LdS (2016) Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst Appl 47:106–119
22. Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1:67–82
23. Ngatchou P, Zarei A, El-Sharkawi M (2005) Pareto multi objective optimization. In: Proceedings of the 13th international conference on intelligent systems application to power systems, pp 84–91
24. Pareto V (1964) Cours d'économie politique. Librairie Droz
25. Edgeworth FY (1881) Mathematical physics. P. Keagan, London
26. Zitzler E (1999) Evolutionary algorithms for multiobjective optimization: methods and applications, vol 63. Citeseer
27. Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach. IEEE Trans Evol Comput 3:257–271
28. Srinivas N, Deb K (1994) Multiobjective optimization using nondominated sorting in genetic algorithms. Evol Comput 2:221–248
29. Zhang Q, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 11:712–731
30. Knowles JD, Corne DW (2000) Approximating the nondominated front using the Pareto archived evolution strategy. Evol Comput 8:149–172
31. Abbass HA, Sarker R, Newton C (2001) PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the 2001 congress on evolutionary computation, pp 971–978
32. Branke J, Kaußler T, Schmeck H (2001) Guidance in evolutionary multi-objective optimization. Adv Eng Softw 32:499–507
33. Eberhart RC, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science, pp 39–43
34. Kennedy J (2011) Particle swarm optimization. In: Encyclopedia of machine learning. Springer, pp 760–766
35. Horn J, Nafpliotis N, Goldberg DE (1994) A niched Pareto genetic algorithm for multiobjective optimization. In: Proceedings of the 1st IEEE conference on evolutionary computation, IEEE world congress on computational intelligence, pp 82–87
36. Mahfoud SW (1995) Niching methods for genetic algorithms. Urbana 51:62–94
37. Horn JR, Nafpliotis N, Goldberg DE (1993) Multiobjective optimization using the niched pareto genetic algorithm. IlliGAL report, Urbana, IL 61801-2296
38. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
39. Van Veldhuizen DA, Lamont GB (1998) Multiobjective evolutionary algorithm research: a history and analysis. Citeseer
46. Srinivasan N, Deb K (1994) Multi-objective function optimisation using non-dominated sorting genetic algorithm. Evol Comput 2:221–248
47. Binh TT, Korn U (1997) MOBES: a multiobjective evolution strategy for constrained optimization problems. In: The 3rd international conference on genetic algorithms (Mendel 97), p 7
48. Osyczka A, Kundu S (1995) A new method to solve generalized multicriteria optimization problems using the simple genetic algorithm. Struct Optim 10:94–99
49. Coello CC, Pulido GT (2005) Multiobjective structural optimization using a microgenetic algorithm. Struct Multidiscip Optim 30:388–403
50. Kurpati A, Azarm S, Wu J (2002) Constraint handling improvements for multiobjective genetic algorithms. Struct Multidiscip Optim 23:204–213
51. Ray T, Liew KM (2002) A swarm metaphor for multiobjective design optimization. Eng Optim 34:141–153
52. Moussouni F, Brisset S, Brochet P (2007) Some results on the design of brushless DC wheel motor using SQP and GA. Int J Appl Electromagn Mech 26:233–241