
Future Generation Computer Systems 97 (2019) 849–872


Harris hawks optimization: Algorithm and applications


Ali Asghar Heidari a,b, Seyedali Mirjalili c, Hossam Faris d, Ibrahim Aljarah d, Majdi Mafarja e, Huiling Chen f

a School of Surveying and Geospatial Engineering, University of Tehran, Tehran, Iran
b Department of Computer Science, School of Computing, National University of Singapore, Singapore
c School of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia
d King Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan
e Department of Computer Science, Birzeit University, POBox 14, West Bank, Palestine
f Department of Computer Science, Wenzhou University, Wenzhou 325035, China

highlights

• A mathematical model is proposed to simulate the hunting behavior of Harris’ hawks.
• An optimization algorithm is proposed using the mathematical model.
• The proposed HHO algorithm is tested on several benchmarks.
• The performance of HHO is also examined on several engineering design problems.
• The results show the merits of the HHO algorithm as compared to the existing algorithms.

article info

Article history:
Received 2 June 2018
Received in revised form 29 December 2018
Accepted 18 February 2019
Available online 28 February 2019

Keywords:
Nature-inspired computing
Harris hawks optimization algorithm
Swarm intelligence
Optimization
Metaheuristic

abstract

In this paper, a novel population-based, nature-inspired optimization paradigm is proposed, which is called Harris Hawks Optimizer (HHO). The main inspiration of HHO is the cooperative behavior and chasing style of Harris’ hawks in nature, called the surprise pounce. In this intelligent strategy, several hawks cooperatively pounce on a prey from different directions in an attempt to surprise it. Harris hawks can reveal a variety of chasing patterns based on the dynamic nature of scenarios and the escaping patterns of the prey. This work mathematically mimics such dynamic patterns and behaviors to develop an optimization algorithm. The effectiveness of the proposed HHO optimizer is checked, through a comparison with other nature-inspired techniques, on 29 benchmark problems and several real-world engineering problems. The statistical results and comparisons show that the HHO algorithm provides very promising and occasionally competitive results compared to well-established metaheuristic techniques. Source codes of HHO are publicly available at http://www.alimirjalili.com/HHO.html and http://www.evo-ml.com/2019/03/02/hho.

© 2019 Elsevier B.V. All rights reserved.

∗ Corresponding author.
E-mail addresses: [email protected], [email protected], [email protected] (A.A. Heidari); [email protected] (S. Mirjalili); [email protected] (H. Faris); [email protected] (I. Aljarah); [email protected] (M. Mafarja); [email protected], [email protected] (H. Chen).
https://doi.org/10.1016/j.future.2019.02.028

1. Introduction

Many real-world problems in machine learning and artificial intelligence generally have a continuous, discrete, constrained or unconstrained nature [1,2]. Due to these characteristics, it is hard to tackle some classes of problems using conventional mathematical programming approaches such as conjugate gradient, sequential quadratic programming, steepest descent, and quasi-Newton methods [3,4]. Several types of research have verified that these methods are not always efficient in dealing with many larger-scale real-world multimodal, non-continuous, and non-differentiable problems [5]. Accordingly, metaheuristic algorithms have been designed and utilized for tackling many problems as competitive alternative solvers, because of their simplicity and easy implementation. In addition, the core operations of these methods do not rely on gradient information of the objective landscape or its mathematical traits. However, the common shortcoming of the majority of metaheuristic algorithms is that they often show a delicate sensitivity to the tuning of user-defined parameters. Another drawback is that metaheuristic algorithms may not always converge to the global optimum [6].

In general, metaheuristic algorithms are of two types [7]: single-solution based (e.g. Simulated Annealing (SA) [8]) and population-based (e.g. Genetic Algorithm (GA) [9]). As the names indicate, in the former type only one solution is processed during the optimization phase, while in the latter type a set of solutions

(i.e. a population) is evolved in each iteration of the optimization process. Population-based techniques can often find an optimal or suboptimal solution that may be the same as the exact optimum or located in its neighborhood. Population-based metaheuristic (P-metaheuristic) techniques mostly mimic natural phenomena [10–13]. These algorithms start the optimization process by generating a set (population) of individuals, where each individual in the population represents a candidate solution to the optimization problem. The population is evolved iteratively by replacing the current population with a newly generated population using some, often stochastic, operators [14,15]. The optimization process proceeds until a stopping criterion is satisfied (e.g. a maximum number of iterations) [16,17].

Fig. 1. Classification of meta-heuristic techniques (meta-heuristic diamond).

Based on the inspiration, P-metaheuristics can be categorized into four main groups [18,19] (see Fig. 1): Evolutionary Algorithms (EAs), Physics-based, Human-based, and Swarm Intelligence (SI) algorithms. EAs mimic biological evolutionary behaviors such as recombination, mutation, and selection. The most popular EA is the GA, which mimics the Darwinian theory of evolution [20]. Other popular examples of EAs are Differential Evolution (DE) [21], Genetic Programming (GP) [20], and Biogeography-Based Optimizer (BBO) [22]. Physics-based algorithms are inspired by physical laws. Some examples of these algorithms are Big-Bang Big-Crunch (BBBC) [23], Central Force Optimization (CFO) [24],
and Gravitational Search Algorithm (GSA) [25]. Salcedo-Sanz [26] has deeply reviewed several physics-based optimizers. The third category of P-metaheuristics includes the set of algorithms that mimic some human behaviors. Some examples of the human-based algorithms are Tabu Search (TS) [27], Socio Evolution and Learning Optimization (SELO) [28], and Teaching Learning Based Optimization (TLBO) [29]. As the last class of P-metaheuristics, SI algorithms mimic the social behaviors (e.g. decentralized, self-organized systems) of organisms living in swarms, flocks, or herds [30,31]. For instance, the flocking behavior of birds is the main inspiration of Particle Swarm Optimization (PSO), proposed by Eberhart and Kennedy [32]. In PSO, each particle in the swarm represents a candidate solution to the optimization problem. In the optimization process, each particle is updated with regard to the position of the global best particle and its own (local) best position. Ant Colony Optimization (ACO) [33], Cuckoo Search (CS) [34], and Artificial Bee Colony (ABC) are other examples of SI techniques.

Regardless of the variety of these algorithms, they share a common feature: the searching steps have two phases, exploration (diversification) and exploitation (intensification) [26]. In the exploration phase, the algorithm should utilize and promote its randomized operators as much as possible to deeply explore various regions and sides of the feature space. Hence, the exploratory behaviors of a well-designed optimizer should have an enriched-enough random nature to efficiently allocate more randomly-generated solutions to different areas of the problem topography during the early steps of the searching process [35]. The exploitation stage is normally performed after the exploration phase. In this phase, the optimizer tries to focus on the neighborhood of better-quality solutions located inside the feature space. It actually intensifies the searching process in a local region instead of all-inclusive regions of the landscape. A well-organized optimizer should be capable of making a reasonable, fine balance between the exploration and exploitation tendencies. Otherwise, the possibility of being trapped in local optima (LO) and of immature convergence increases.

We have witnessed a growing interest in the successful, inexpensive, and efficient application of EAs and SI algorithms in recent years. However, referring to the No Free Lunch (NFL) theorem [36], all optimization algorithms proposed so far show an equivalent performance on average if we apply them to all possible optimization tasks. According to the NFL theorem, we cannot theoretically consider an algorithm as a general-purpose, universally-best optimizer. Hence, the NFL theorem encourages the search for more efficient optimizers. As a result of the NFL theorem, besides the widespread studies on the efficacy, performance aspects, and results of traditional EAs and SI algorithms, new optimizers with specific global and local searching strategies have emerged in recent years to provide more variety of choices for researchers and experts in different fields.

In this paper, a new nature-inspired optimization technique is proposed to compete with other optimizers. The main idea behind the proposed optimizer is inspired by the cooperative behaviors of one of the most intelligent birds, Harris’ hawks, in hunting escaping preys (rabbits in most cases) [37]. For this purpose, a new mathematical model is developed in this paper. Then, a stochastic metaheuristic is designed based on the proposed mathematical model to tackle various optimization problems. It should be noted that the name Harris’s hawk and a similar inspiration have been used in [38], in which the authors modified the mathematical model of the Grey Wolf Optimizer (MOGWO and GWO) and used it to solve multi-objective optimization problems. There were no new mathematical equations, and the algorithm was developed completely based on MOGWO and GWO. In this work, however, we propose new mathematical models to mimic all the stages of hunts used by Harris’s hawks and to solve single-objective optimization problems efficiently.

The rest of this research is organized as follows. Section 2 presents the background inspiration and information about the cooperative life of Harris’ hawks. Section 3 presents the mathematical model and computational procedures of the HHO algorithm. The results of HHO in solving different benchmark and real-world case studies are presented in Section 4. Section 5 discusses the results. Finally, Section 6 concludes the work with some useful perspectives.

2. Background

In 1997, Louis Lefebvre proposed an approach to measure avian ‘‘IQ’’ based on observed innovations in feeding behaviors [39]. Based on his studies [39–42], hawks can be listed amongst the most intelligent birds in nature. The Harris’ hawk (Parabuteo unicinctus) is a well-known bird of prey that

Fig. 2. Harris’s hawk.

survives in somewhat steady groups found in the southern half of Arizona, USA [37]. Harmonized foraging involving several animals for catching and then sharing the slain animal has been persuasively observed for only particular mammalian carnivores. The Harris’s hawk is distinguished because of its unique cooperative foraging activities together with other family members living in the same stable group, while other raptors usually attack alone to discover and catch a quarry. This avian desert predator shows evolved, innovative team-chasing capabilities in tracing, encircling, flushing out, and eventually attacking the potential quarry. These smart birds can organize dinner parties consisting of several individuals in the non-breeding season. They are known as truly cooperative predators in the raptor realm. As reported by Bednarz [37] in 1998, they begin the team mission at morning twilight, leaving the rest roosts and often perching on giant trees or power poles inside their home realm. They know their family members and try to be aware of their moves during the attack. When assembled and the party gets started, some hawks one after the other make short tours and then land on rather high perches. In this manner, the hawks occasionally perform a ‘‘leapfrog’’ motion all over the target site, and they rejoin and split several times to actively search for the covered animal, which is usually a rabbit.1

The main tactic of Harris’ hawks to capture a prey is the ‘‘surprise pounce’’, which is also known as the ‘‘seven kills’’ strategy. In this intelligent strategy, several hawks try to cooperatively attack from different directions and simultaneously converge on a detected escaping rabbit outside the cover. The attack may rapidly be completed by capturing the surprised prey in a few seconds, but occasionally, regarding the escaping capabilities and behaviors of the prey, the seven kills may include multiple, short-length, quick dives nearby the prey during several minutes. Harris’ hawks can demonstrate a variety of chasing styles dependent on the dynamic nature of circumstances and the escaping patterns of a prey. A switching tactic occurs when the best hawk (leader) stoops at the prey and gets lost, and the chase is continued by one of the party members. These switching activities can be observed in different situations because they are beneficial for confusing the escaping rabbit. The main advantage of these cooperative tactics is that the Harris’ hawks can pursue the detected rabbit to exhaustion, which increases its vulnerability. Moreover, by perplexing the escaping prey, it cannot recover its defensive capabilities and finally, it cannot escape from the confronting team besiege, since one of the hawks, which is often the most powerful and experienced one, effortlessly captures the tired rabbit and shares it with the other party members. Harris’ hawks and their main behaviors can be seen in nature, as captured in Fig. 2.

1 Interested readers can refer to the following documentary videos: (a) https://bit.ly/2Qew2qN, (b) https://bit.ly/2qsh8Cl, (c) https://bit.ly/2P7OMvH, (d) https://bit.ly/2DosJdS.

3. Harris hawks optimization (HHO)

In this section, we model the exploratory and exploitative phases of the proposed HHO, inspired by the exploration of a prey, the surprise pounce, and the different attacking strategies of Harris hawks. HHO is a population-based, gradient-free optimization technique; hence, it can be applied to any optimization problem subject to a proper formulation. Fig. 3 shows all phases of HHO, which are described in the next subsections.

Fig. 3. Different phases of HHO.

3.1. Exploration phase

In this part, the exploration mechanism of HHO is proposed. If we consider the nature of Harris’ hawks, they can track and detect the prey with their powerful eyes, but occasionally the prey cannot be seen easily. Hence, the hawks wait, observe, and monitor the desert site to detect a prey, maybe after several hours. In HHO, the Harris’ hawks are the candidate solutions, and the best candidate solution in each step is considered as the intended prey or nearly the optimum. In HHO, the Harris’ hawks perch randomly on some locations and wait to detect a prey based on two strategies. If we consider an equal chance q for each perching strategy, they

perch based on the positions of other family members (to be close enough to them when attacking) and the rabbit, which is modeled in Eq. (1) for the condition of q < 0.5, or perch on random tall trees (random locations inside the group’s home range), which is modeled in Eq. (1) for the condition of q ≥ 0.5.

X(t + 1) = { Xrand(t) − r1 |Xrand(t) − 2 r2 X(t)|,                 q ≥ 0.5
           { (Xrabbit(t) − Xm(t)) − r3 (LB + r4 (UB − LB)),        q < 0.5      (1)

where X(t + 1) is the position vector of hawks in the next iteration t, Xrabbit(t) is the position of the rabbit, X(t) is the current position vector of hawks, r1, r2, r3, r4, and q are random numbers inside (0,1), which are updated in each iteration, LB and UB show the lower and upper bounds of the variables, Xrand(t) is a randomly selected hawk from the current population, and Xm is the average position of the current population of hawks.

We propose a simple model to generate random locations inside the group’s home range (LB, UB). The first rule generates solutions based on a random location and other hawks. In the second rule of Eq. (1), we have the difference of the location of the best so far and the average position of the group, plus a randomly-scaled component based on the range of variables, while r3 is a scaling coefficient to further increase the random nature of the rule once r4 takes values close to 1 and similar distribution patterns may occur. In this rule, we add a randomly scaled movement length to the LB. Then, we considered a random scaling coefficient for the component to provide more diversification trends and explore different regions of the feature space. It is possible to construct different updating rules, but we utilized the simplest rule, which is able to mimic the behaviors of hawks. The average position of hawks is attained using Eq. (2):

Xm(t) = (1/N) Σ_{i=1}^{N} Xi(t)      (2)

where Xi(t) indicates the location of each hawk in iteration t and N denotes the total number of hawks. It is possible to obtain the average location in different ways, but we utilized the simplest rule.

3.2. Transition from exploration to exploitation

The HHO algorithm can transfer from exploration to exploitation and then change between different exploitative behaviors based on the escaping energy of the prey. The energy of a prey decreases considerably during the escaping behavior. To model this fact, the energy of a prey is modeled as:

E = 2 E0 (1 − t/T)      (3)

where E indicates the escaping energy of the prey, T is the maximum number of iterations, and E0 is the initial state of its energy. In HHO, E0 randomly changes inside the interval (−1, 1) at each iteration. When the value of E0 decreases from 0 to −1, the rabbit is physically flagging, whilst when the value of E0 increases from 0 to 1, it means that the rabbit is strengthening. The dynamic escaping energy E has a decreasing trend over the iterations. When the escaping energy |E| ≥ 1, the hawks search different regions to explore a rabbit location; hence, the HHO performs the exploration phase, and when |E| < 1, the algorithm tries to exploit the neighborhood of the solutions during the exploitation steps. In short, exploration happens when |E| ≥ 1, while exploitation happens in later steps when |E| < 1. The time-dependent behavior of E is also demonstrated in Fig. 4.

Fig. 4. Behavior of E during two runs and 500 iterations.

3.3. Exploitation phase

In this phase, the Harris’ hawks perform the surprise pounce (seven kills, as called in [37]) by attacking the intended prey detected in the previous phase. However, preys often attempt to escape from dangerous situations. Hence, different chasing styles occur in real situations. According to the escaping behaviors of the prey and the chasing strategies of the Harris’ hawks, four possible strategies are proposed in the HHO to model the attacking stage.

The preys always try to escape from threatening situations. Suppose that r is the chance of a prey successfully escaping (r < 0.5) or not successfully escaping (r ≥ 0.5) before the surprise pounce. Whatever the prey does, the hawks will perform a hard or soft besiege to catch the prey. It means that they will encircle the prey from different directions, softly or hard, depending on the retained energy of the prey. In real situations, the hawks get closer and closer to the intended prey to increase their chances of cooperatively killing the rabbit by performing the surprise pounce. After several minutes, the escaping prey will lose more and more energy; then, the hawks intensify the besiege process to effortlessly catch the exhausted prey. To model this strategy and enable the HHO to switch between soft and hard besiege processes, the E parameter is utilized. In this regard, when |E| ≥ 0.5, the soft besiege happens, and when |E| < 0.5, the hard besiege occurs.

3.3.1. Soft besiege

When r ≥ 0.5 and |E| ≥ 0.5, the rabbit still has enough energy and tries to escape by some random misleading jumps, but finally it cannot. During these attempts, the Harris’ hawks encircle it softly to make the rabbit more exhausted and then perform the surprise pounce. This behavior is modeled by the following rules:

X(t + 1) = ∆X(t) − E |J Xrabbit(t) − X(t)|      (4)

∆X(t) = Xrabbit(t) − X(t)      (5)

where ∆X(t) is the difference between the position vector of the rabbit and the current location in iteration t, r5 is a random number inside (0,1), and J = 2(1 − r5) represents the random jump strength of the rabbit throughout the escaping procedure. The J value changes randomly in each iteration to simulate the nature of rabbit motions.
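The exploration, escaping-energy, and soft-besiege rules described so far (Eqs. (1)–(5)) are straightforward to transcribe into code. The sketch below is an illustrative Python rendering of our own (function and variable names are ours, not taken from the authors' released source); it assumes positions are NumPy vectors and uses `numpy.random.default_rng` for the random draws.

```python
import numpy as np

def exploration_step(X, i, X_rabbit, lb, ub, rng):
    """One exploration move for hawk i, following Eq. (1) with Xm from Eq. (2)."""
    q, r1, r2, r3, r4 = rng.random(5)
    if q >= 0.5:
        # Perch relative to a randomly selected member of the population
        X_rand = X[rng.integers(X.shape[0])]
        return X_rand - r1 * np.abs(X_rand - 2.0 * r2 * X[i])
    # Perch relative to the rabbit and the mean position of the group
    X_m = X.mean(axis=0)                       # Eq. (2)
    return (X_rabbit - X_m) - r3 * (lb + r4 * (ub - lb))

def escaping_energy(t, T, rng):
    """Escaping energy of the prey, Eq. (3): E = 2*E0*(1 - t/T), E0 in (-1, 1)."""
    E0 = 2.0 * rng.random() - 1.0
    return 2.0 * E0 * (1.0 - t / T)

def soft_besiege(x, X_rabbit, E, rng):
    """Soft besiege update, Eqs. (4)-(5), with jump strength J = 2*(1 - r5)."""
    J = 2.0 * (1.0 - rng.random())
    delta = X_rabbit - x                       # Eq. (5)
    return delta - E * np.abs(J * X_rabbit - x)
```

Note how |E| shrinks linearly with t in `escaping_energy`; this is exactly what drives the switch from exploration (|E| ≥ 1) to the exploitation branches (|E| < 1).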

3.3.2. Hard besiege

When r ≥ 0.5 and |E| < 0.5, the prey is so exhausted that it has a low escaping energy. In addition, the Harris’ hawks hardly encircle the intended prey to finally perform the surprise pounce. In this situation, the current positions are updated using Eq. (6):

X(t + 1) = Xrabbit(t) − E |∆X(t)|      (6)

A simple example of this step with one hawk is depicted in Fig. 5.

Fig. 5. Example of overall vectors in the case of hard besiege.

3.3.3. Soft besiege with progressive rapid dives

When still |E| ≥ 0.5 but r < 0.5, the rabbit has enough energy to successfully escape, and still a soft besiege is constructed before the surprise pounce. This procedure is more intelligent than the previous case.

To mathematically model the escaping patterns of the prey and the leapfrog movements (as called in [37]), the levy flight (LF) concept is utilized in the HHO algorithm. The LF is utilized to mimic the real zigzag deceptive motions of preys (particularly rabbits) during the escaping phase and the irregular, abrupt, and rapid dives of hawks around the escaping prey. Actually, hawks perform several team rapid dives around the rabbit and try to progressively correct their location and directions with regard to the deceptive motions of the prey. This mechanism is also supported by real observations in other competitive situations in nature. It has been confirmed that LF-based activities are the optimal searching tactics for foragers/predators in non-destructive foraging conditions [43,44]. In addition, it has been detected that LF-based patterns can be found in the chasing activities of animals like monkeys and sharks [45–48]. Hence, the LF-based motions were utilized within this phase of the HHO technique.

Inspired by the real behaviors of hawks, we supposed that they can progressively select the best possible dive toward the prey when they wish to catch it in competitive situations. Therefore, to perform a soft besiege, we supposed that the hawks can evaluate (decide) their next move based on the following rule in Eq. (7):

Y = Xrabbit(t) − E |J Xrabbit(t) − X(t)|      (7)

Then, they compare the possible result of such a movement to the previous dive to detect whether it will be a good dive or not. If it is not reasonable (when they see that the prey is performing more deceptive motions), they also start to perform irregular, abrupt, and rapid dives when approaching the rabbit. We supposed that they will dive based on the LF-based patterns using the following rule:

Z = Y + S × LF(D)      (8)

where D is the dimension of the problem, S is a random vector of size 1 × D, and LF is the levy flight function, which is calculated using Eq. (9) [49]:

LF(x) = 0.01 × (u × σ) / |v|^(1/β),   σ = ( Γ(1 + β) × sin(πβ/2) / ( Γ((1 + β)/2) × β × 2^((β−1)/2) ) )^(1/β)      (9)

where u, v are random values inside (0,1) and β is a default constant set to 1.5.

Hence, the final strategy for updating the positions of hawks in the soft besiege phase can be performed by Eq. (10):

X(t + 1) = { Y   if F(Y) < F(X(t))
           { Z   if F(Z) < F(X(t))      (10)

where Y and Z are obtained using Eqs. (7) and (8).

A simple illustration of this step for one hawk is demonstrated in Fig. 6. Note that the position history of the LF-based leapfrog movement patterns during some iterations is also recorded and shown in this illustration. The colored dots are the location footprints of LF-based patterns in one trial, and then the HHO reaches the location Z. In each step, only the better position Y or Z will be selected as the next location. This strategy is applied to all search agents.

Fig. 6. Example of overall vectors in the case of soft besiege with progressive rapid dives.

3.3.4. Hard besiege with progressive rapid dives

When |E| < 0.5 and r < 0.5, the rabbit does not have enough energy to escape, and a hard besiege is constructed before the surprise pounce to catch and kill the prey. The situation of this step on the prey side is similar to that in the soft besiege, but this time the hawks try to decrease the distance of their average location from the escaping prey. Therefore, the following rule is performed in the hard besiege condition:

X(t + 1) = { Y   if F(Y) < F(X(t))
           { Z   if F(Z) < F(X(t))      (11)

where Y and Z are obtained using the new rules in Eqs. (12) and (13).

Y = Xrabbit(t) − E |J Xrabbit(t) − Xm(t)|      (12)

Z = Y + S × LF(D)      (13)

where Xm(t) is obtained using Eq. (2). A simple example of this step is demonstrated in Fig. 7. Note that the colored dots are the location footprints of LF-based patterns in one trial, and only Y or Z will be the next location for the new iteration.

3.4. Pseudocode of HHO

The pseudocode of the proposed HHO algorithm is reported in Algorithm 1.
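The levy flight of Eq. (9) and the dive-and-compare updates of Eqs. (7)–(8) and (10) (soft besiege) and Eqs. (11)–(13) (hard besiege) can be sketched as follows. This is an illustrative Python transcription under our own naming, not the authors' released code; following the text, u and v are drawn from (0, 1) and β = 1.5.

```python
import math
import numpy as np

def levy_flight(dim, rng, beta=1.5):
    """LF(x) = 0.01 * (u * sigma) / |v|**(1/beta), Eq. (9), with u, v in (0, 1)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim) * sigma
    v = rng.random(dim)
    return 0.01 * u / np.abs(v) ** (1 / beta)

def progressive_dives(x, X_rabbit, X_mean, E, F, rng, soft=True):
    """Eqs. (7)-(10) (soft) or (11)-(13) (hard): try a plain dive Y, then an
    LF-based dive Z, and keep whichever improves the fitness F (minimization)."""
    dim = x.size
    J = 2.0 * (1.0 - rng.random())
    base = x if soft else X_mean              # Eq. (7) uses X(t); Eq. (12) uses Xm(t)
    Y = X_rabbit - E * np.abs(J * X_rabbit - base)
    Z = Y + rng.random(dim) * levy_flight(dim, rng)   # S is a random 1-by-D vector
    if F(Y) < F(x):
        return Y
    if F(Z) < F(x):
        return Z
    return x
```

Because the update is only accepted when F(Y) or F(Z) beats the current fitness, a hawk's position never worsens in these two branches.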

Algorithm 1 Pseudo-code of HHO algorithm

Inputs: The population size N and maximum number of iterations T
Outputs: The location of rabbit and its fitness value
Initialize the random population Xi (i = 1, 2, . . . , N)
while (stopping condition is not met) do
    Calculate the fitness values of hawks
    Set Xrabbit as the location of rabbit (best location)
    for (each hawk (Xi)) do
        Update the initial energy E0 and jump strength J    ▷ E0 = 2rand() − 1, J = 2(1 − rand())
        Update the E using Eq. (3)
        if (|E| ≥ 1) then    ▷ Exploration phase
            Update the location vector using Eq. (1)
        if (|E| < 1) then    ▷ Exploitation phase
            if (r ≥ 0.5 and |E| ≥ 0.5) then    ▷ Soft besiege
                Update the location vector using Eq. (4)
            else if (r ≥ 0.5 and |E| < 0.5) then    ▷ Hard besiege
                Update the location vector using Eq. (6)
            else if (r < 0.5 and |E| ≥ 0.5) then    ▷ Soft besiege with progressive rapid dives
                Update the location vector using Eq. (10)
            else if (r < 0.5 and |E| < 0.5) then    ▷ Hard besiege with progressive rapid dives
                Update the location vector using Eq. (11)
Return Xrabbit
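Putting the pieces of Algorithm 1 together, a condensed, self-contained Python version might look like the following. This is our own transcription for illustration only (the authors' reference implementation is the one linked in the abstract); the function names and the boundary-clipping step are our additions.

```python
import math
import numpy as np

def hho(F, dim, lb, ub, n_hawks=20, T=200, seed=0):
    """Condensed sketch of Algorithm 1 for minimizing F over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = lb + rng.random((n_hawks, dim)) * (ub - lb)

    def levy(d, beta=1.5):
        sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
                 / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        return 0.01 * rng.random(d) * sigma / np.abs(rng.random(d)) ** (1 / beta)

    fitness = np.array([F(x) for x in X])
    best = int(np.argmin(fitness))
    X_rabbit, f_rabbit = X[best].copy(), float(fitness[best])

    for t in range(T):
        for i in range(n_hawks):
            E = 2.0 * (2.0 * rng.random() - 1.0) * (1.0 - t / T)   # Eq. (3)
            J = 2.0 * (1.0 - rng.random())
            r = rng.random()
            if abs(E) >= 1.0:                                      # exploration, Eq. (1)
                if rng.random() >= 0.5:
                    X_rand = X[rng.integers(n_hawks)]
                    X[i] = X_rand - rng.random() * np.abs(X_rand - 2.0 * rng.random() * X[i])
                else:
                    X[i] = (X_rabbit - X.mean(axis=0)) - rng.random() * (lb + rng.random() * (ub - lb))
            elif r >= 0.5 and abs(E) >= 0.5:                       # soft besiege, Eqs. (4)-(5)
                X[i] = (X_rabbit - X[i]) - E * np.abs(J * X_rabbit - X[i])
            elif r >= 0.5:                                         # hard besiege, Eq. (6)
                X[i] = X_rabbit - E * np.abs(X_rabbit - X[i])
            else:                                                  # progressive rapid dives
                base = X[i] if abs(E) >= 0.5 else X.mean(axis=0)   # Eq. (7) vs. Eq. (12)
                Y = X_rabbit - E * np.abs(J * X_rabbit - base)
                Z = Y + rng.random(dim) * levy(dim)                # Eqs. (8)/(13)
                if F(Y) < F(X[i]):
                    X[i] = Y
                elif F(Z) < F(X[i]):
                    X[i] = Z
            X[i] = np.clip(X[i], lb, ub)                           # keep hawks in bounds
            fi = F(X[i])
            if fi < f_rabbit:                                      # rabbit = best-so-far
                X_rabbit, f_rabbit = X[i].copy(), fi
    return X_rabbit, f_rabbit
```

On a smooth unimodal function such as the sphere, this sketch quickly collapses the population onto the rabbit; for serious use, the authors' released source should be preferred.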

Fig. 7. Example of overall vectors in the case of hard besiege with progressive rapid dives in 2D and 3D space.

3.5. Computational complexity

Note that the computational complexity of the HHO mainly depends on three processes: initialization, fitness evaluation, and updating of hawks. With N hawks, the computational complexity of the initialization process is O(N). The computational complexity of the updating mechanism is O(T × N) + O(T × N × D), which is composed of searching for the best location and updating the location vector of all hawks, where T is the maximum number of iterations and D is the dimension of specific problems. Therefore, the computational complexity of HHO is O(N × (T + TD + 1)).

4. Experimental results and discussions

4.1. Benchmark set and compared algorithms

In order to investigate the efficacy of the proposed HHO optimizer, a well-studied set of diverse benchmark functions is selected from the literature [50,51]. This benchmark set covers three main groups of benchmark landscapes: unimodal (UM), multimodal (MM), and composition (CM). The UM functions (F1–F7), with a unique global best, can reveal the exploitative (intensification) capacities of different optimizers, while the MM functions (F8–F23) can disclose the exploration (diversification) and LO avoidance potentials of algorithms. The mathematical formulation and characteristics of UM and MM problems are shown in Tables 16–18 in Appendix A. The third group of problems (F24–F29) is selected from the IEEE CEC 2005 competition [52] and covers hybrid composite, rotated and shifted MM test cases. These CM cases are also utilized in many papers and can expose the performance of the utilized optimizers in well balancing the exploration

Fig. 8. Demonstration of composition test functions.

Table 1
The parameter settings.

Algorithm   Parameter                                     Value
DE          Scaling factor                                0.5
            Crossover probability                         0.5
PSO         Topology                                      fully connected
            Inertia factor                                0.3
            c1                                            1
            c2                                            1
TLBO        Teaching factor T                             1, 2
GWO         Convergence constant a                        [2 0]
MFO         Convergence constant a                        [−2 −1]
            Spiral factor b                               1
CS          Discovery rate of alien solutions pa          0.25
BA          Qmin Frequency minimum                        0
            Qmax Frequency maximum                        2
            A Loudness                                    0.5
            r Pulse rate                                  0.5
FA          α                                             0.5
            β                                             0.2
            γ                                             1
FPA         Probability switch p                          0.8
BBO         Habitat modification probability              1
            Immigration probability limits                [0, 1]
            Step size                                     1
            Max immigration (I) and Max emigration (E)    1
            Mutation probability                          0.005

and exploitation inclinations and escaping from LO in dealing with challenging problems. Details of the CM test problems are also reported in Table 19 in Appendix A. Fig. 8 demonstrates three of the composition test problems.

The results and performance of the proposed HHO are compared with other well-established optimization techniques such as the GA [22], BBO [22], DE [22], PSO [22], CS [34], TLBO [29], BA/BAT [53], FPA [54], FA [55], GWO [56], and MFO [57] algorithms, based on the best, worst, standard deviation (STD) and average (AVG) of the results. These algorithms cover both recently proposed techniques such as MFO, GWO, CS, TLBO, BAT, FPA, and FA and also relatively the most utilized optimizers in the field, like the GA, DE, PSO, and BBO algorithms. As recommended by Derrac et al. [58], the non-parametric

the rabbit is also monitored during iterations. The search history diagram reveals the history of those positions visited by artificial hawks during iterations. The map of the trajectory of the first hawk monitors how the first variable of the first hawk varies during the steps of the process. The average fitness of hawks monitors how the average fitness of the whole population varies during the process of optimization. The convergence metric also reveals how the fitness value of the rabbit (best solution) varies during the optimization. Note that the diagram of escaping energy demonstrates how the energy of the rabbit varies during the simulation.

From the history of sampled locations in Figs. 9–11, it can be observed that the HHO reveals a similar pattern in dealing with different cases, in which the hawks attempt to initially boost the diversification and explore the favorable areas of the solution space and then exploit the vicinity of the best locations. The diagram of trajectories can help us to comprehend the searching behavior of the foremost hawk (as a representative of the rest of the hawks). By this metric, we can check whether the foremost hawk faces abrupt changes during the early phases and gradual variations in the concluding steps. Referring to Van Den Bergh and Engelbrecht [60], these activities can guarantee that a P-metaheuristic finally converges to a position and exploits the target region.

As per the trajectories in Figs. 9–11, we see that the foremost hawk starts the searching procedure with sudden movements. The amplitude of these variations covers more than 50% of the solution space. This observation can disclose the exploration propensities of the proposed HHO. As time passes, the amplitude of these fluctuations gradually decreases. This point guarantees the transition of HHO from exploratory trends to exploitative steps. Eventually, the motion pattern of the first hawk becomes very stable, which shows that the HHO is exploiting the promising regions during the concluding steps. By monitoring the average fitness of the population, the next measure, we can notice the reduction patterns in fitness values when the HHO enriches the excellence of the randomized candidate hawks. Based on the diagrams demonstrated in Figs. 9–11, the HHO can enhance the quality of all hawks during half of the iterations, and there is an accelerating decreasing pattern in all curves. Again, the amplitude
Wilcoxon statistical test with 5% degree of significance is also of variations of fitness results decreases by more iteration. Hence,
performed along with experimental assessments to detect the the HHO can dynamically focus on more promising areas during
significant differences between the attained results of different iterations. According to convergence curves in Fig. Figs. 9–11,
techniques. which shows the average fitness of best hawk found so far, we
can detect accelerated decreasing patterns in all curves, especially
4.2. Experimental setup after half of the iteration. We can also detect the estimated
moment that the HHO shift from exploration to exploitation. In
All algorithms were implemented under Matlab 7.10 (R2010a) this regard, it is observed that the HHO can reveal an accelerated
on a computer with a Windows 7 64-bit professional and 64 GB convergence trend.
RAM. The swarm size and maximum iterations of all optimizers
are set to 30 and 500, respectively. All results are recorded and 4.4. Scalability analysis
compared based on the average performance of optimizers over
30 independent runs. In this section, a scalability assessment is utilized to investi-
The settings of GA, PSO, DE and BBO algorithms are same with gate the impact of dimension on the results of HHO. This test has
those set by Dan Simon in the original work of BBO [22], while been utilized in the previous studies and it can reveal the impact
for the BA [53], FA [59], TLBO [29], GWO [56], FPA [54], CS [34], of dimensions on the quality of solutions for the HHO optimizer
and MFO [57], the parameters are same with the recommended to recognize its efficacy not only for problems with lower dimen-
settings in the original works. The used parameters are also sions but also for higher dimension tasks. In addition, it reveals
reported in Table 1. how a P-metaheuristic can preserve its searching advantages in
higher dimensions. For this experiment, the HHO is utilized to
4.3. Qualitative results of HHO tackle the scalable UM and MM F1–F13 test cases with 30, 100,
500, and 1000 dimensions. The average error AVG and STD of the
The qualitative results of HHO for several standard unimodal attained results of all optimizers over 30 independent runs and
and multimodal test problems are demonstrated in Figs. 9–11. 500 iterations are recorded and compared for each dimension.
These results include four well-known metrics: search history, Table 2 reveals the results of HHO versus other methods in
the trajectory of the first hawk, average fitness of population, dealing with F1–F13 problems with different dimensions. The
and convergence behavior. In addition, the escaping energy of scalability results for all techniques are also illustrated in Fig. 12.
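The escaping-energy diagram mentioned in Section 4.3 follows the energy rule defined earlier in the paper, E = 2E0(1 − t/T) with E0 drawn uniformly from (−1, 1) at every iteration. A minimal, reproducible sketch of that schedule:

```python
import random

def escaping_energy(t, T, rng):
    """Escaping energy of the rabbit as modeled in the paper:
    E = 2*E0*(1 - t/T), where E0 is redrawn from (-1, 1) every iteration."""
    E0 = rng.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
T = 500                  # maximum iterations used in the experiments
energies = [escaping_energy(t, T, rng) for t in range(T)]
# |E| stays inside the linearly shrinking envelope 2*(1 - t/T), so early
# iterations allow exploration (|E| >= 1) and late ones force exploitation.
```

Plotting `energies` against `t` reproduces the qualitative shape of the escaping-energy panels in Figs. 9–11: a noisy signal inside a linearly shrinking envelope.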

Fig. 9. Qualitative results for unimodal F1, F3, and F4 problems.



Fig. 10. Qualitative results for F7, F9, and F10 problems.

Fig. 11. Qualitative results for F13 problem.

Table 2
Results of HHO for different dimensions of scalable F1–F13 problems.
Problem/D Metric 30 100 500 1000
F1 AVG 3.95E−97 1.91E−94 1.46E−92 1.06E−94
STD 1.72E−96 8.66E−94 8.01E−92 4.97E−94
F2 AVG 1.56E−51 9.98E−52 7.87E−49 2.52E−50
STD 6.98E−51 2.66E−51 3.11E−48 5.02E−50
F3 AVG 1.92E−63 1.84E−59 6.54E−37 1.79E−17
STD 1.05E−62 1.01E−58 3.58E−36 9.81E−17
F4 AVG 1.02E−47 8.76E−47 1.29E−47 1.43E−46
STD 5.01E−47 4.79E−46 4.11E−47 7.74E−46
F5 AVG 1.32E−02 2.36E−02 3.10E−01 5.73E−01
STD 1.87E−02 2.99E−02 3.73E−01 1.40E+00
F6 AVG 1.15E−04 5.12E−04 2.94E−03 3.61E−03
STD 1.56E−04 6.77E−04 3.98E−03 5.38E−03
F7 AVG 1.40E−04 1.85E−04 2.51E−04 1.41E−04
STD 1.07E−04 4.06E−04 2.43E−04 1.63E−04
F8 AVG −1.25E+04 −4.19E+04 −2.09E+05 −4.19E+05
STD 1.47E+02 2.82E+00 2.84E+01 1.03E+02
F9 AVG 0.00E+00 0.00E+00 0.00E+00 0.00E+00
STD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
F10 AVG 8.88E−16 8.88E−16 8.88E−16 8.88E−16
STD 4.01E−31 4.01E−31 4.01E−31 4.01E−31
F11 AVG 0.00E+00 0.00E+00 0.00E+00 0.00E+00
STD 0.00E+00 0.00E+00 0.00E+00 0.00E+00
F12 AVG 7.35E−06 4.23E−06 1.41E−06 1.02E−06
STD 1.19E−05 5.25E−06 1.48E−06 1.16E−06
F13 AVG 1.57E−04 9.13E−05 3.44E−04 8.41E−04
STD 2.15E−04 1.26E−04 4.75E−04 1.18E−03
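Each AVG/STD cell in Table 2 condenses 30 independent runs into an average and a sample standard deviation, printed in scientific notation. A minimal sketch of that aggregation and formatting (the run values below are hypothetical):

```python
import statistics

def summarize(errors):
    """Collapse the per-run errors of one optimizer/dimension pair into the
    AVG/STD strings used in the result tables (scientific, 2 decimals)."""
    avg = statistics.fmean(errors)
    std = statistics.stdev(errors)  # sample standard deviation
    return "%.2E" % avg, "%.2E" % std

# Hypothetical per-run errors for one (problem, dimension) cell.
runs = [3.9e-97, 4.1e-97, 3.8e-97]
avg_s, std_s = summarize(runs)
```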

Note that the detailed results of all techniques are reported in the next parts.

As can be seen in Table 2, HHO exposes excellent results in all dimensions, and its performance remains consistently superior when realizing cases with many variables. As per the curves in Fig. 12, it is observed that the optimality of the results and the performance of the other methods significantly degrade with increasing dimension. This reveals that HHO is capable of maintaining a good balance between the exploratory and exploitative tendencies on problems with many variables.

4.5. Quantitative results of HHO and discussion

In this section, the results of HHO are compared with those of other optimizers for different dimensions of the F1–F13 test problems, in addition to the F14–F29 MM and CM test cases. Note that the results are presented for 30, 100, 500, and 1000 dimensions of the scalable F1–F13 problems. Tables 3–6 show the obtained results for HHO versus the other competitors in dealing with the scalable functions. Table 8 also reveals the performance of the algorithms in dealing with the F14–F29 test problems. In order to investigate the significant differences between the results of the proposed HHO

Table 3
Results of benchmark functions (F1–F13), with 30 dimensions.
Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 3.95E−97 1.03E+03 1.83E+04 7.59E+01 2.01E+03 1.18E−27 6.59E+04 7.11E−03 9.06E−04 1.01E+03 2.17E−89 1.33E−03
STD 1.72E−96 5.79E+02 3.01E+03 2.75E+01 5.60E+02 1.47E−27 7.51E+03 3.21E−03 4.55E−04 3.05E+03 3.14E−89 5.92E−04
F2 AVG 1.56E−51 2.47E+01 3.58E+02 1.36E−03 3.22E+01 9.71E−17 2.71E+08 4.34E−01 1.49E−01 3.19E+01 2.77E−45 6.83E−03
STD 6.98E−51 5.68E+00 1.35E+03 7.45E−03 5.55E+00 5.60E−17 1.30E+09 1.84E−01 2.79E−02 2.06E+01 3.11E−45 2.06E−03
F3 AVG 1.92E−63 2.65E+04 4.05E+04 1.21E+04 1.41E+03 5.12E−05 1.38E+05 1.66E+03 2.10E−01 2.43E+04 3.91E−18 3.97E+04
STD 1.05E−62 3.44E+03 8.21E+03 2.69E+03 5.59E+02 2.03E−04 4.72E+04 6.72E+02 5.69E−02 1.41E+04 8.04E−18 5.37E+03
F4 AVG 1.02E−47 5.17E+01 4.39E+01 3.02E+01 2.38E+01 1.24E−06 8.51E+01 1.11E−01 9.65E−02 7.00E+01 1.68E−36 1.15E+01
STD 5.01E−47 1.05E+01 3.64E+00 4.39E+00 2.77E+00 1.94E−06 2.95E+00 4.75E−02 1.94E−02 7.06E+00 1.47E−36 2.37E+00
F5 AVG 1.32E−02 1.95E+04 1.96E+07 1.82E+03 3.17E+05 2.70E+01 2.10E+08 7.97E+01 2.76E+01 7.35E+03 2.54E+01 1.06E+02
STD 1.87E−02 1.31E+04 6.25E+06 9.40E+02 1.75E+05 7.78E−01 4.17E+07 7.39E+01 4.51E−01 2.26E+04 4.26E−01 1.01E+02
F6 AVG 1.15E−04 9.01E+02 1.87E+04 6.71E+01 1.70E+03 8.44E−01 6.69E+04 6.94E−03 3.13E−03 2.68E+03 3.29E−05 1.44E−03
STD 1.56E−04 2.84E+02 2.92E+03 2.20E+01 3.13E+02 3.18E−01 5.87E+03 3.61E−03 1.30E−03 5.84E+03 8.65E−05 5.38E−04
F7 AVG 1.40E−04 1.91E−01 1.07E+01 2.91E−03 3.41E−01 1.70E−03 4.57E+01 6.62E−02 7.29E−02 4.50E+00 1.16E−03 5.24E−02
STD 1.07E−04 1.50E−01 3.05E+00 1.83E−03 1.10E−01 1.06E−03 7.82E+00 4.23E−02 2.21E−02 9.21E+00 3.63E−04 1.37E−02
F8 AVG −1.25E+04 −1.26E+04 −3.86E+03 −1.24E+04 −6.45E+03 −5.97E+03 −2.33E+03 −5.85E+03 −5.19E+19 −8.48E+03 −7.76E+03 −6.82E+03
STD 1.47E+02 4.51E+00 2.49E+02 3.50E+01 3.03E+02 7.10E+02 2.96E+02 1.16E+03 1.76E+20 7.98E+02 1.04E+03 3.94E+02
F9 AVG 0.00E+00 9.04E+00 2.87E+02 0.00E+00 1.82E+02 2.19E+00 1.92E+02 3.82E+01 1.51E+01 1.59E+02 1.40E+01 1.58E+02
STD 0.00E+00 4.58E+00 1.95E+01 0.00E+00 1.24E+01 3.69E+00 3.56E+01 1.12E+01 1.25E+00 3.21E+01 5.45E+00 1.17E+01
F10 AVG 8.88E−16 1.36E+01 1.75E+01 2.13E+00 7.14E+00 1.03E−13 1.92E+01 4.58E−02 3.29E−02 1.74E+01 6.45E−15 1.21E−02
STD 4.01E−31 1.51E+00 3.67E−01 3.53E−01 1.08E+00 1.70E−14 2.43E−01 1.20E−02 7.93E−03 4.95E+00 1.79E−15 3.30E−03
F11 AVG 0.00E+00 1.01E+01 1.70E+02 1.46E+00 1.73E+01 4.76E−03 6.01E+02 4.23E−03 4.29E−05 3.10E+01 0.00E+00 3.52E−02
STD 0.00E+00 2.43E+00 3.17E+01 1.69E−01 3.63E+00 8.57E−03 5.50E+01 1.29E−03 2.00E−05 5.94E+01 0.00E+00 7.20E−02
F12 AVG 2.08E−06 4.77E+00 1.51E+07 6.68E−01 3.05E+02 4.83E−02 4.71E+08 3.13E−04 5.57E−05 2.46E+02 7.35E−06 2.25E−03
STD 1.19E−05 1.56E+00 9.88E+06 2.62E−01 1.04E+03 2.12E−02 1.54E+08 1.76E−04 4.96E−05 1.21E+03 7.45E−06 1.70E−03
F13 AVG 1.57E−04 1.52E+01 5.73E+07 1.82E+00 9.59E+04 5.96E−01 9.40E+08 2.08E−03 8.19E−03 2.73E+07 7.89E−02 9.12E−03
STD 2.15E−04 4.52E+00 2.68E+07 3.41E−01 1.46E+05 2.23E−01 1.67E+08 9.62E−04 6.74E−03 1.04E+08 8.78E−02 1.16E−02

Table 4
Results of benchmark functions (F1–F13), with 100 dimensions.
Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 1.91E−94 5.41E+04 1.06E+05 2.85E+03 1.39E+04 1.59E−12 2.72E+05 3.05E−01 3.17E−01 6.20E+04 3.62E−81 8.26E+03
STD 8.66E−94 1.42E+04 8.47E+03 4.49E+02 2.71E+03 1.63E−12 1.42E+04 5.60E−02 5.28E−02 1.25E+04 4.14E−81 1.32E+03
F2 AVG 9.98E−52 2.53E+02 6.06E+23 1.59E+01 1.01E+02 4.31E−08 6.00E+43 1.45E+01 4.05E+00 2.46E+02 3.27E−41 1.21E+02
STD 2.66E−51 1.41E+01 2.18E+24 3.74E+00 9.36E+00 1.46E−08 1.18E+44 6.73E+00 3.16E−01 4.48E+01 2.75E−41 2.33E+01
F3 AVG 1.84E−59 2.53E+05 4.22E+05 1.70E+05 1.89E+04 4.09E+02 1.43E+06 4.65E+04 6.88E+00 2.15E+05 4.33E−07 5.01E+05
STD 1.01E−58 5.03E+04 7.08E+04 2.02E+04 5.44E+03 2.77E+02 6.21E+05 6.92E+03 1.02E+00 4.43E+04 8.20E−07 5.87E+04
F4 AVG 8.76E−47 8.19E+01 6.07E+01 7.08E+01 3.51E+01 8.89E−01 9.41E+01 1.91E+01 2.58E−01 9.31E+01 6.36E−33 9.62E+01
STD 4.79E−46 3.15E+00 3.05E+00 4.73E+00 3.37E+00 9.30E−01 1.49E+00 3.12E+00 2.80E−02 2.13E+00 6.66E−33 1.00E+00
F5 AVG 2.36E−02 2.37E+07 2.42E+08 4.47E+05 4.64E+06 9.79E+01 1.10E+09 8.46E+02 1.33E+02 1.44E+08 9.67E+01 1.99E+07
STD 2.99E−02 8.43E+06 4.02E+07 2.05E+05 1.98E+06 6.75E−01 9.47E+07 8.13E+02 7.34E+00 7.50E+07 7.77E−01 5.80E+06
F6 AVG 5.12E−04 5.42E+04 1.07E+05 2.85E+03 1.26E+04 1.03E+01 2.69E+05 2.95E−01 2.65E+00 6.68E+04 3.27E+00 8.07E+03
STD 6.77E−04 1.09E+04 9.70E+03 4.07E+02 2.06E+03 1.05E+00 1.25E+04 5.34E−02 3.94E−01 1.46E+04 6.98E−01 1.64E+03
F7 AVG 1.85E−04 2.73E+01 3.41E+02 1.25E+00 5.84E+00 7.60E−03 3.01E+02 5.65E−01 1.21E+00 2.56E+02 1.50E−03 1.96E+01
STD 4.06E−04 4.45E+01 8.74E+01 5.18E+00 2.16E+00 2.66E−03 2.66E+01 1.64E−01 2.65E−01 8.91E+01 5.39E−04 5.66E+00
F8 AVG −4.19E+04 −4.10E+04 −7.33E+03 −3.85E+04 −1.28E+04 −1.67E+04 −4.07E+03 −1.81E+04 −2.84E+18 −2.30E+04 −1.71E+04 −1.19E+04
STD 2.82E+00 1.14E+02 4.75E+02 2.80E+02 4.64E+02 2.62E+03 9.37E+02 3.23E+03 6.91E+18 1.98E+03 3.54E+03 5.80E+02
F9 AVG 0.00E+00 3.39E+02 1.16E+03 9.11E+00 8.47E+02 1.03E+01 7.97E+02 2.36E+02 1.72E+02 8.65E+02 1.02E+01 1.03E+03
STD 0.00E+00 4.17E+01 5.74E+01 2.73E+00 4.01E+01 9.02E+00 6.33E+01 2.63E+01 9.24E+00 8.01E+01 5.57E+01 4.03E+01
F10 AVG 8.88E−16 1.82E+01 1.91E+01 5.57E+00 8.21E+00 1.20E−07 1.94E+01 9.81E−01 3.88E−01 1.99E+01 1.66E−02 1.22E+01
STD 4.01E−31 4.35E−01 2.04E−01 4.72E−01 1.14E+00 5.07E−08 6.50E−02 2.55E−01 5.23E−02 8.58E−02 9.10E−02 8.31E−01
F11 AVG 0.00E+00 5.14E+02 9.49E+02 2.24E+01 1.19E+02 4.87E−03 2.47E+03 1.19E−01 4.56E−03 5.60E+02 0.00E+00 7.42E+01
STD 0.00E+00 1.05E+02 6.00E+01 4.35E+00 2.00E+01 1.07E−02 1.03E+02 2.34E−02 9.73E−04 1.23E+02 0.00E+00 1.40E+01
F12 AVG 4.23E−06 4.55E+06 3.54E+08 3.03E+02 1.55E+05 2.87E−01 2.64E+09 4.45E+00 2.47E−02 2.82E+08 3.03E−02 3.90E+07
STD 5.25E−06 8.22E+06 8.75E+07 1.48E+03 1.74E+05 6.41E−02 2.69E+08 1.32E+00 5.98E−03 1.45E+08 1.02E−02 1.88E+07
F13 AVG 9.13E−05 5.26E+07 8.56E+08 6.82E+04 2.76E+06 6.87E+00 5.01E+09 4.50E+01 5.84E+00 6.68E+08 5.47E+00 7.19E+07
STD 1.26E−04 3.76E+07 2.16E+08 3.64E+04 1.80E+06 3.32E−01 3.93E+08 2.24E+01 1.21E+00 3.05E+08 8.34E−01 2.73E+07

versus the other optimizers, the Wilcoxon rank-sum test with a 5% significance level is carefully performed here [58]. Tables 20–24 in Appendix B show the attained p-values of the Wilcoxon rank-sum test at 5% significance.

As per the results in Table 3, HHO obtains the best results compared to the other competitors on the F1–F5, F7, and F9–F13 problems. The results of HHO are considerably better than those of the other algorithms on 84.6% of these 30-dimensional functions, demonstrating the superior performance of this optimizer. According to the p-values in Table 20, the observed differences in the results are statistically meaningful for all cases. From Table 4, when we have a 100-dimensional search space, HHO considerably outperforms the other techniques and attains the best results for 92.3% of the F1–F13 problems. It is observed that the results of HHO are again remarkably better than those of the other techniques. With regard to the p-values in Table 21, the solutions of HHO are significantly better than those realized by the other techniques in almost all cases. From Table 5, we see that HHO attains the best results in terms of AVG and STD on 12 test cases with 500 dimensions. By considering the p-values in Table 22, it is recognized that HHO can significantly outperform the other optimizers in all cases. As per the results in Table 6, similarly to what we observed in lower dimensions, HHO still has a remarkably superior performance in dealing with the F1–F13 test functions compared to the GA, PSO, DE, BBO, CS, GWO, MFO, TLBO, BAT, FA, and FPA optimizers. The statistical results in Table 23 also verify the significant gap between the results of HHO and the other optimizers in almost all cases. It is seen that HHO reaches the best global optimum for the F9 and F11 cases in every dimension.

In order to further check the efficacy of HHO, we recorded the running time taken by the optimizers to find the solutions for the F1–F13 problems with 1000 dimensions; the results are exposed in Table 7. As per the results in Table 7, HHO shows a reasonably fast and competitive performance in finding the best solutions compared to other well-established optimizers, even for high-dimensional unimodal and multimodal cases. Based on the average running time on the 13 problems, HHO performs faster than the BBO, PSO, GA, CS, GWO, and FA algorithms. These observations are also in accordance with the computational complexity of HHO.
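The pairwise comparison used throughout this section can be sketched as follows. This is a minimal, stdlib-only version of the two-sided Wilcoxon rank-sum test using the normal approximation (adequate for 30-run samples; the paper used the standard test), with hypothetical error values standing in for the recorded results:

```python
import math

def ranksum_test(x, y):
    """Two-sided Wilcoxon rank-sum test, normal approximation.
    Ties get average ranks; no tie correction of the variance."""
    n1, n2 = len(x), len(y)
    pooled = sorted((v, i) for i, v in enumerate(list(x) + list(y)))
    ranks = [0.0] * (n1 + n2)
    i = 0
    while i < n1 + n2:
        j = i
        while j + 1 < n1 + n2 and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        avg_rank = (i + j) / 2.0 + 1.0  # average of 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = avg_rank
        i = j + 1
    w = sum(ranks[:n1])                     # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    z = (w - mean) / math.sqrt(var)
    p = math.erfc(abs(z) / math.sqrt(2.0))  # two-sided p-value
    return z, p

# Hypothetical final errors of two optimizers over 30 independent runs each.
hho_err = [1e-6 * (i + 1) for i in range(30)]
rival_err = [0.5 + 1e-3 * i for i in range(30)]
z, p = ranksum_test(hho_err, rival_err)
# p < 0.05 means the difference is significant at the 5% level used here.
```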

Table 5
Results of benchmark functions (F1–F13), with 500 dimensions.
Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 1.46E−92 6.06E+05 6.42E+05 1.60E+05 8.26E+04 1.42E−03 1.52E+06 6.30E+04 6.80E+00 1.15E+06 2.14E−77 7.43E+05
STD 8.01E−92 7.01E+04 2.96E+04 9.76E+03 1.32E+04 3.99E−04 3.58E+04 8.47E+03 4.93E−01 3.54E+04 1.94E−77 3.67E+04
F2 AVG 7.87E−49 1.94E+03 6.08E+09 5.95E+02 5.13E+02 1.10E−02 8.34E+09 7.13E+02 4.57E+01 3.00E+08 2.31E−39 3.57E+09
STD 3.11E−48 7.03E+01 1.70E+10 1.70E+01 4.84E+01 1.93E−03 1.70E+10 3.76E+01 2.05E+00 1.58E+09 1.63E−39 1.70E+10
F3 AVG 6.54E−37 5.79E+06 1.13E+07 2.98E+06 5.34E+05 3.34E+05 3.37E+07 1.19E+06 2.03E+02 4.90E+06 1.06E+00 1.20E+07
STD 3.58E−36 9.08E+05 1.43E+06 3.87E+05 1.34E+05 7.95E+04 1.41E+07 1.88E+05 2.72E+01 1.02E+06 3.70E+00 1.49E+06
F4 AVG 1.29E−47 9.59E+01 8.18E+01 9.35E+01 4.52E+01 6.51E+01 9.82E+01 5.00E+01 4.06E−01 9.88E+01 4.02E−31 9.92E+01
STD 4.11E−47 1.20E+00 1.49E+00 9.05E−01 4.28E+00 5.72E+00 3.32E−01 1.73E+00 3.03E−02 4.15E−01 2.67E−31 2.33E−01
F5 AVG 3.10E−01 1.79E+09 1.84E+09 2.07E+08 3.30E+07 4.98E+02 6.94E+09 2.56E+07 1.21E+03 5.01E+09 4.97E+02 4.57E+09
STD 3.73E−01 4.11E+08 1.11E+08 2.08E+07 8.76E+06 5.23E−01 2.23E+08 6.14E+06 7.04E+01 2.50E+08 3.07E−01 1.25E+09
F6 AVG 2.94E−03 6.27E+05 6.57E+05 1.68E+05 8.01E+04 9.22E+01 1.53E+06 6.30E+04 8.27E+01 1.16E+06 7.82E+01 7.23E+05
STD 3.98E−03 7.43E+04 3.29E+04 8.23E+03 9.32E+03 2.15E+00 3.37E+04 8.91E+03 2.24E+00 3.48E+04 2.50E+00 3.28E+04
F7 AVG 2.51E−04 9.10E+03 1.43E+04 2.62E+03 2.53E+02 4.67E−02 2.23E+04 3.71E+02 8.05E+01 3.84E+04 1.71E−03 2.39E+04
STD 2.43E−04 2.20E+03 1.51E+03 3.59E+02 6.28E+01 1.12E−02 1.15E+03 6.74E+01 1.37E+01 2.24E+03 4.80E−04 2.72E+03
F8 AVG −2.09E+05 −1.31E+05 −1.65E+04 −1.42E+05 −3.00E+04 −5.70E+04 −9.03E+03 −7.27E+04 −2.10E+17 −6.29E+04 −5.02E+04 −2.67E+04
STD 2.84E+01 2.31E+04 9.99E+02 1.98E+03 1.14E+03 3.12E+03 2.12E+03 1.15E+04 1.14E+18 5.71E+03 1.00E+04 1.38E+03
F9 AVG 0.00E+00 3.29E+03 6.63E+03 7.86E+02 4.96E+03 7.84E+01 6.18E+03 2.80E+03 2.54E+03 6.96E+03 0.00E+00 7.14E+03
STD 0.00E+00 1.96E+02 1.07E+02 3.42E+01 7.64E+01 3.13E+01 1.20E+02 1.42E+02 5.21E+01 1.48E+02 0.00E+00 1.05E+02
F10 AVG 8.88E−16 1.96E+01 1.97E+01 1.44E+01 8.55E+00 1.93E−03 2.04E+01 1.24E+01 1.07E+00 2.03E+01 7.62E−01 2.06E+01
STD 4.01E−31 2.04E−01 1.04E−01 2.22E−01 8.66E−01 3.50E−04 3.25E−02 4.46E−01 6.01E−02 1.48E−01 2.33E+00 2.45E−01
F11 AVG 0.00E+00 5.42E+03 5.94E+03 1.47E+03 6.88E+02 1.55E−02 1.38E+04 5.83E+02 2.66E−02 1.03E+04 0.00E+00 6.75E+03
STD 0.00E+00 7.32E+02 3.19E+02 8.10E+01 8.17E+01 3.50E−02 3.19E+02 7.33E+01 2.30E−03 4.43E+02 0.00E+00 2.97E+02
F12 AVG 1.41E−06 2.79E+09 3.51E+09 1.60E+08 4.50E+06 7.42E−01 1.70E+10 8.67E+05 3.87E−01 1.20E+10 4.61E−01 1.60E+10
STD 1.48E−06 1.11E+09 4.16E+08 3.16E+07 3.37E+06 4.38E−02 6.29E+08 6.23E+05 2.47E−02 6.82E+08 2.40E−02 2.34E+09
F13 AVG 3.44E−04 8.84E+09 6.82E+09 5.13E+08 3.94E+07 5.06E+01 3.17E+10 2.29E+07 6.00E+01 2.23E+10 4.98E+01 2.42E+10
STD 4.75E−04 2.00E+09 8.45E+08 6.59E+07 1.87E+07 1.30E+00 9.68E+08 9.46E+06 1.13E+00 1.13E+09 9.97E−03 6.39E+09

Table 6
Results of benchmark functions (F1–F13), with 1000 dimensions.
Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 1.06E−94 1.36E+06 1.36E+06 6.51E+05 1.70E+05 2.42E−01 3.12E+06 3.20E+05 1.65E+01 2.73E+06 2.73E−76 2.16E+06
STD 4.97E−94 1.79E+05 6.33E+04 2.37E+04 2.99E+04 4.72E−02 4.61E+04 2.11E+04 1.27E+00 4.70E+04 7.67E−76 3.39E+05
F2 AVG 2.52E−50 4.29E+03 1.79E+10 1.96E+03 8.34E+02 7.11E−01 1.79E+10 1.79E+10 1.02E+02 1.79E+10 1.79E+10 1.79E+10
STD 5.02E−50 8.86E+01 1.79E+10 2.18E+01 8.96E+01 4.96E−01 1.79E+10 1.79E+10 3.49E+00 1.79E+10 1.79E+10 1.79E+10
F3 AVG 1.79E−17 2.29E+07 3.72E+07 9.92E+06 1.95E+06 1.49E+06 1.35E+08 4.95E+06 8.67E+02 1.94E+07 8.61E−01 5.03E+07
STD 9.81E−17 3.93E+06 1.16E+07 1.48E+06 4.20E+05 2.43E+05 4.76E+07 7.19E+05 1.10E+02 3.69E+06 1.33E+00 4.14E+06
F4 AVG 1.43E−46 9.79E+01 8.92E+01 9.73E+01 5.03E+01 7.94E+01 9.89E+01 6.06E+01 4.44E−01 9.96E+01 1.01E−30 9.95E+01
STD 7.74E−46 7.16E−01 2.39E+00 7.62E−01 5.37E+00 2.77E+00 2.22E−01 2.69E+00 2.24E−02 1.49E−01 5.25E−31 1.43E−01
F5 AVG 5.73E−01 4.73E+09 3.72E+09 1.29E+09 7.27E+07 1.06E+03 1.45E+10 2.47E+08 2.68E+03 1.25E+10 9.97E+02 1.49E+10
STD 1.40E+00 9.63E+08 2.76E+08 6.36E+07 1.84E+07 3.07E+01 3.20E+08 3.24E+07 1.27E+02 3.15E+08 2.01E−01 3.06E+08
F6 AVG 3.61E−03 1.52E+06 1.38E+06 6.31E+05 1.60E+05 2.03E+02 3.11E+06 3.18E+05 2.07E+02 2.73E+06 1.93E+02 2.04E+06
STD 5.38E−03 1.88E+05 6.05E+04 1.82E+04 1.86E+04 2.45E+00 6.29E+04 2.47E+04 4.12E+00 4.56E+04 2.35E+00 2.46E+05
F7 AVG 1.41E−04 4.45E+04 6.26E+04 3.84E+04 1.09E+03 1.47E−01 1.25E+05 4.44E+03 4.10E+02 1.96E+05 1.83E−03 2.27E+05
STD 1.63E−04 8.40E+03 4.16E+03 2.91E+03 3.49E+02 3.28E−02 3.93E+03 4.00E+02 8.22E+01 6.19E+03 5.79E−04 3.52E+04
F8 AVG −4.19E+05 −1.94E+05 −2.30E+04 −2.29E+05 −4.25E+04 −8.64E+04 −1.48E+04 −1.08E+05 −9.34E+14 −9.00E+04 −6.44E+04 −3.72E+04
STD 1.03E+02 9.74E+03 1.70E+03 3.76E+03 1.47E+03 1.91E+04 3.14E+03 1.69E+04 2.12E+15 7.20E+03 1.92E+04 1.23E+03
F9 AVG 0.00E+00 8.02E+03 1.35E+04 2.86E+03 1.01E+04 2.06E+02 1.40E+04 7.17E+03 6.05E+03 1.56E+04 0.00E+00 1.50E+04
STD 0.00E+00 3.01E+02 1.83E+02 9.03E+01 1.57E+02 4.81E+01 1.85E+02 1.88E+02 1.41E+02 1.94E+02 0.00E+00 1.79E+02
F10 AVG 8.88E−16 1.95E+01 1.98E+01 1.67E+01 8.62E+00 1.88E−02 2.07E+01 1.55E+01 1.18E+00 2.04E+01 5.09E−01 2.07E+01
STD 4.01E−31 2.55E−01 1.24E−01 8.63E−02 9.10E−01 2.74E−03 2.23E−02 2.42E−01 5.90E−02 2.16E−01 1.94E+00 1.06E−01
F11 AVG 0.00E+00 1.26E+04 1.23E+04 5.75E+03 1.52E+03 6.58E−02 2.83E+04 2.87E+03 3.92E−02 2.47E+04 1.07E−16 1.85E+04
STD 0.00E+00 1.63E+03 5.18E+02 1.78E+02 2.66E+02 8.82E−02 4.21E+02 1.78E+02 3.58E−03 4.51E+02 2.03E−17 2.22E+03
F12 AVG 1.02E−06 1.14E+10 7.73E+09 1.56E+09 8.11E+06 1.15E+00 3.63E+10 6.76E+07 6.53E−01 3.04E+10 6.94E−01 3.72E+10
STD 1.16E−06 1.27E+09 6.72E+08 1.46E+08 3.46E+06 1.82E−01 1.11E+09 1.80E+07 2.45E−02 9.72E+08 1.90E−02 7.67E+08
F13 AVG 8.41E−04 1.91E+10 1.58E+10 4.17E+09 8.96E+07 1.21E+02 6.61E+10 4.42E+08 1.32E+02 5.62E+10 9.98E+01 6.66E+10
STD 1.18E−03 4.21E+09 1.56E+09 2.54E+08 3.65E+07 1.11E+01 1.40E+09 7.91E+07 1.48E+00 1.76E+09 1.31E−02 2.26E+09

The results in Table 8 verify that HHO provides superior and very competitive results on the F14–F23 fixed-dimension MM test cases. The results on F16–F18 are very competitive, and all algorithms attain high-quality results. Based on the results in Table 8, the proposed HHO always achieves the best results on the F14–F23 problems in comparison with the other approaches. Based on the results for the F24–F29 hybrid CM functions in Table 8, HHO is capable of achieving high-quality solutions and outperforming the other competitors. The p-values in Table 24 also confirm the meaningful advantage of HHO compared to the other optimizers for the majority of cases.

4.6. Engineering benchmark sets

In this section, the proposed HHO is applied to six well-known benchmark engineering problems. Tackling engineering design tasks using P-metaheuristics is a well-regarded research direction in the previous works [61,62]. The results of HHO are compared to various conventional and modified optimizers proposed in previous studies. Table 9 tabulates the details of the tackled engineering design tasks.

4.6.1. Three-bar truss design problem

This problem can be regarded as one of the most studied cases in previous works [63]. It can be described mathematically as follows:

Consider \vec{X} = [x_1 \ x_2] = [A_1 \ A_2],

Minimize f(\vec{X}) = (2\sqrt{2}\,x_1 + x_2) \times l,

Subject to g_1(\vec{X}) = \dfrac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0,

g_2(\vec{X}) = \dfrac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0,

g_3(\vec{X}) = \dfrac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0,

Variable range 0 \le x_1, x_2 \le 1,

where l = 100 cm, P = 2 kN/cm^2, and \sigma = 2 kN/cm^2.

Fig. 13 demonstrates the shape of the formulated truss and the related forces on this structure. With regard to Fig. 13 and the

Fig. 12. Scalability results of the HHO versus other methods in dealing with the F1–F13 cases with different dimensions.

formulation, we have two parameters: the area of bars 1 and 3, and the area of bar 2. The objective of this task is to minimize the total weight of the structure. In addition, this design case has several constraints, including stress, deflection, and buckling.
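Substituting the HHO solution reported in Table 10 into the formulation above reproduces the reported optimal weight; a quick numerical check (constants l = 100, P = σ = 2 as stated, with the stress constraint g1 essentially active at the optimum):

```python
import math

L_BAR, P, SIGMA = 100.0, 2.0, 2.0  # l = 100 cm, P = sigma = 2 kN/cm^2

def truss_weight(x1, x2):
    """Three-bar truss objective: f = (2*sqrt(2)*x1 + x2) * l."""
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def truss_constraints(x1, x2):
    """Stress constraints g1..g3; feasible when all are <= 0."""
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return g1, g2, g3

# The HHO solution from Table 10.
x1, x2 = 0.788662816, 0.408283133832900
w = truss_weight(x1, x2)          # ~263.8958, matching Table 10
g = truss_constraints(x1, x2)     # g1 ~ 0 (active), g2 and g3 well below 0
```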

Table 7
Comparison of average running time results (seconds) over 30 runs for larger-scale problems with 1000 variables.
ID Metric HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 AVG 2.03E+00 8.27E+01 8.29E+01 1.17E+02 2.13E+00 4.47E+00 1.60E+00 5.62E+00 5.47E+00 3.23E+00 2.21E+00 2.38E+00
STD 4.04E−01 5.13E+00 4.04E+00 6.04E+00 2.62E−01 2.64E−01 2.08E−01 4.42E−01 4.00E−01 2.06E−01 3.62E−01 2.70E−01
F2 AVG 1.70E+00 8.41E+01 8.28E+01 1.16E+02 2.09E+00 4.37E+00 1.61E+00 2.57E+00 5.50E+00 3.25E+00 1.99E+00 2.28E+00
STD 7.37E−02 4.65E+00 4.08E+00 6.28E+00 8.64E−02 1.29E−01 1.02E−01 3.93E−01 3.48E−01 1.56E−01 1.19E−01 1.16E−01
F3 AVG 1.17E+02 1.32E+02 1.30E+02 1.65E+02 5.10E+01 5.20E+01 5.23E+01 3.70E+01 1.02E+02 5.11E+01 9.76E+01 5.04E+01
STD 5.28E+00 5.68E+00 5.73E+00 7.56E+00 2.01E+00 1.93E+00 2.25E+00 1.49E+00 3.73E+00 2.00E+00 3.87E+00 1.98E+00
F4 AVG 2.05E+00 8.14E+01 8.24E+01 1.18E+02 1.90E+00 4.27E+00 1.44E+00 5.43E+00 5.14E+00 3.14E+00 1.87E+00 2.21E+00
STD 7.40E−02 3.73E+00 3.91E+00 5.48E+00 5.83E−02 1.36E−01 1.02E−01 2.76E−01 2.33E−01 9.28E−02 1.05E−01 8.73E−02
F5 AVG 2.95E+00 8.16E+01 8.33E+01 1.17E+02 2.04E+00 4.46E+00 1.65E+00 5.61E+00 5.49E+00 3.31E+00 2.23E+00 2.38E+00
STD 8.36E−02 4.13E+00 4.36E+00 5.91E+00 7.79E−02 1.39E−01 1.16E−01 3.01E−01 2.74E−01 1.27E−01 1.09E−01 1.30E−01
F6 AVG 2.49E+00 8.08E+01 8.26E+01 1.17E+02 1.88E+00 4.29E+00 1.47E+00 5.51E+00 5.17E+00 3.13E+00 1.89E+00 2.19E+00
STD 8.25E−02 3.96E+00 3.95E+00 5.69E+00 4.98E−02 1.07E−01 1.03E−01 2.87E−01 2.35E−01 1.00E−01 9.33E−02 1.02E−01
F7 AVG 8.20E+00 8.26E+01 8.52E+01 1.18E+02 4.79E+00 7.08E+00 4.22E+00 6.89E+00 1.08E+01 5.83E+00 7.23E+00 4.95E+00
STD 1.69E−01 4.56E+00 3.94E+00 6.10E+00 1.02E−01 7.56E−02 8.98E−02 2.02E−01 3.86E−01 1.01E−01 1.31E−01 1.43E−01
F8 AVG 4.86E+00 8.47E+01 8.36E+01 1.18E+02 3.18E+00 5.21E+00 2.45E+00 6.04E+00 7.69E+00 4.05E+00 3.84E+00 3.23E+00
STD 1.03E+00 3.68E+00 3.80E+00 5.52E+00 4.73E−01 1.78E−01 2.88E−01 2.69E−01 3.86E−01 1.20E−01 4.12E−01 8.69E−02
F9 AVG 3.77E+00 8.09E+01 8.33E+01 1.15E+02 2.84E+00 4.72E+00 2.33E+00 5.89E+00 6.90E+00 3.94E+00 2.70E+00 3.20E+00
STD 8.87E−01 3.59E+00 3.88E+00 5.94E+00 4.30E−01 1.19E−01 2.88E−01 2.55E−01 3.34E−01 1.26E−01 4.71E−01 5.50E−01
F10 AVG 3.75E+00 8.24E+01 8.36E+01 1.17E+02 2.96E+00 4.80E+00 2.46E+00 5.98E+00 6.56E+00 4.04E+00 2.84E+00 3.41E+00
STD 8.75E−01 4.02E+00 3.99E+00 5.90E+00 3.74E−01 1.14E−01 4.67E−01 2.91E−01 3.51E−01 1.21E−01 5.39E−01 3.01E−01
F11 AVG 4.17E+00 8.23E+01 8.38E+01 1.18E+02 3.16E+00 4.95E+00 2.61E+00 6.03E+00 6.43E+00 4.22E+00 3.03E+00 3.38E+00
STD 5.56E−01 4.41E+00 3.97E+00 6.02E+00 5.50E−01 8.65E−02 3.95E−01 2.50E−01 3.01E−01 1.20E−01 3.95E−01 9.95E−02
F12 AVG 1.90E+01 8.64E+01 8.85E+01 1.23E+02 9.09E+00 1.06E+01 8.66E+00 9.17E+00 1.90E+01 9.67E+00 1.53E+01 9.14E+00
STD 3.31E+00 4.47E+00 4.42E+00 6.20E+00 1.39E+00 4.33E−01 1.47E+00 3.62E−01 3.53E+00 4.04E−01 2.54E+00 1.14E+00
F13 AVG 1.89E+01 8.64E+01 8.90E+01 1.23E+02 9.28E+00 1.05E+01 8.74E+00 9.24E+00 1.83E+01 9.66E+00 1.46E+01 9.34E+00
STD 1.56E+00 4.40E+00 4.20E+00 6.29E+00 1.50E+00 4.56E−01 1.38E+00 3.94E−01 7.75E−01 3.91E−01 2.24E+00 1.24E+00
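The entries of Table 7 are averages of wall-clock times over 30 independent runs. A harness in the spirit of that protocol (illustrative only; the paper's measurements were taken in Matlab, and `dummy_run` is a hypothetical stand-in for one optimizer run):

```python
import statistics
import time

def time_runs(optimizer, runs=30):
    """Mean and sample STD of wall-clock running time over independent runs,
    mirroring the protocol behind Table 7."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        optimizer()
        samples.append(time.perf_counter() - start)
    return statistics.fmean(samples), statistics.stdev(samples)

def dummy_run():
    # Placeholder workload standing in for one run on a 1000-variable problem.
    sum(i * i for i in range(1000))

avg_t, std_t = time_runs(dummy_run, runs=30)
```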

Table 8
Results of benchmark functions (F14–F29).
Benchmark HHO GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F14 AVG 9.98E−01 9.98E−01 1.39E+00 9.98E−01 9.98E−01 4.17E+00 1.27E+01 3.51E+00 1.27E+01 2.74E+00 9.98E−01 1.23E+00
STD 9.23E−01 4.52E−16 4.60E−01 4.52E−16 2.00E−04 3.61E+00 6.96E+00 2.16E+00 1.81E−15 1.82E+00 4.52E−16 9.23E−01
F15 AVG 3.10E−04 3.33E−02 1.61E−03 1.66E−02 6.88E−04 6.24E−03 3.00E−02 1.01E−03 3.13E−04 2.35E−03 1.03E−03 5.63E−04
STD 1.97E−04 2.70E−02 4.60E−04 8.60E−03 1.55E−04 1.25E−02 3.33E−02 4.01E−04 2.99E−05 4.92E−03 3.66E−03 2.81E−04
F16 AVG −1.03E+00 −3.78E−01 −1.03E+00 −8.30E−01 −1.03E+00 −1.03E+00 −6.87E−01 −1.03E+00 −1.03E+00 −1.03E+00 −1.03E+00 −1.03E+00
STD 6.78E−16 3.42E−01 2.95E−03 3.16E−01 6.78E−16 6.78E−16 8.18E−01 6.78E−16 6.78E−16 6.78E−16 6.78E−16 6.78E−16
F17 AVG 3.98E−01 5.24E−01 4.00E−01 5.49E−01 3.98E−01 3.98E−01 3.98E−01 3.98E−01 3.98E−01 3.98E−01 3.98E−01 3.98E−01
STD 2.54E−06 6.06E−02 1.39E−03 6.05E−02 1.69E−16 1.69E−16 1.58E−03 1.69E−16 1.69E−16 1.69E−16 1.69E−16 1.69E−16
F18 AVG 3.00E+00 3.00E+00 3.10E+00 3.00E+00 3.00E+00 3.00E+00 1.47E+01 3.00E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00
STD 0.00E+00 0.00E+00 7.60E−02 0.00E+00 0.00E+00 4.07E−05 2.21E+01 0.00E+00 0.00E+00 0.00E+00 0.00E+00 0.00E+00
F19 AVG −3.86E+00 −3.42E+00 −3.86E+00 −3.78E+00 −3.86E+00 −3.86E+00 −3.84E+00 −3.86E+00 −3.86E+00 −3.86E+00 −3.86E+00 −3.86E+00
STD 2.44E−03 3.03E−01 1.24E−03 1.26E−01 3.16E−15 3.14E−03 1.41E−01 3.16E−15 3.16E−15 1.44E−03 3.16E−15 3.16E−15
F20 AVG −3.322 −1.61351 −3.11088 −2.70774 −3.2951 −3.25866 −3.2546 −3.28105 −3.322 −3.23509 −3.24362 −3.27048
STD 0.137406 0.46049 0.029126 0.357832 0.019514 0.064305 0.058943 0.063635 1.77636E−15 0.064223 0.15125 0.058919
F21 AVG −10.1451 −6.66177 −4.14764 −8.31508 −5.21514 −8.64121 −4.2661 −7.67362 −5.0552 −6.8859 −8.64525 −9.64796
STD 0.885673 3.732521 0.919578 2.883867 0.008154 2.563356 2.554009 3.50697 1.77636E−15 3.18186 1.76521 1.51572
F22 AVG −10.4015 −5.58399 −6.01045 −9.38408 −5.34373 −10.4014 −5.60638 −9.63827 −5.0877 −8.26492 −10.2251 −9.74807
STD 1.352375 2.605837 1.962628 2.597238 0.053685 0.000678 3.022612 2.293901 8.88178E−16 3.076809 0.007265 1.987703
F23 AVG −10.5364 −4.69882 −4.72192 −6.2351 −5.29437 −10.0836 −3.97284 −9.75489 −5.1285 −7.65923 −10.0752 −10.5364
STD 0.927655 3.256702 1.742618 3.78462 0.356377 1.721889 3.008279 2.345487 1.77636E−15 3.576927 1.696222 8.88E−15
F24 AVG 396.8256 626.8389 768.1775 493.0129 518.7886 486.5743 1291.474 471.9752 469.0141 412.4627 612.5569 431.0767
STD 79.58214 101.2255 76.09641 102.6058 47.84199 142.9028 150.4189 252.1018 60.62538 68.38819 123.2403 64.1864
F25 AVG 910 999.4998 1184.819 935.4693 1023.799 985.4172 1463.423 953.8902 910.1008 947.9322 967.088 917.6204
STD 0 29.44366 33.02676 9.61349 31.85965 29.95368 68.41612 11.74911 0.036659 27.06628 27.39906 1.052473
F26 AVG 910 998.9091 1178.34 934.2718 1018.002 973.5362 1480.683 953.5493 910.1252 940.1221 983.774 917.346
STD 0 25.27817 35.20755 8.253209 34.87908 22.45008 45.55006 14.086 0.047205 21.68256 45.32275 0.897882
F27 AVG 910 1002.032 1195.088 939.7644 1010.392 969.8538 1477.919 947.7667 910.1233 945.4266 978.7344 917.3067
STD 0 26.66321 23.97978 23.07814 31.51188 19.51721 60.58827 11.18408 0.049732 26.79031 38.22729 0.861945
F28 AVG 860.8925 1512.467 1711.981 1068.631 1539.357 1337.671 1961.526 1016.389 1340.078 1455.918 1471.879 1553.993
STD 0.651222 94.64553 35.18377 201.9045 42.93441 191.0662 58.46188 270.6854 134.183 36.06884 268.6238 96.35255
F29 AVG 558.9653 1937.396 2101.145 1897.439 2033.614 1909.091 2221.404 1986.206 1903.852 1882.974 1883.773 1897.031
STD 5.112352 11.25913 29.74533 8.823239 30.2875 6.567542 35.54849 18.88722 185.7944 6.528261 3.493192 4.203909

Table 9
Brief description of the tackled engineering design tasks. (D: dimension, CV: continuous variables, DV: discrete variables, NC: number of constraints, AC: active constraints, F/S: ratio of the feasible solutions in the solution domain (F) to the whole search domain (S), OB: objective.)
No. Name D CV DV NC AC F/S OB
1 Three-bar truss 2 2 0 3 NA NA Minimize weight
2 Tension/compression spring 3 3 0 4 2 0.01 Minimize weight
3 Pressure vessel 4 2 2 4 2 0.40 Minimize cost
4 Welded beam 4 4 0 7 2 0.035 Minimize cost
5 Multi-plate disc clutch brake 5 0 5 8 1 0.700 Minimize weight
6 Rolling element bearing 10 9 1 9 4 0.015 Maximize dynamic load
A.A. Heidari, S. Mirjalili, H. Faris et al. / Future Generation Computer Systems 97 (2019) 849–872 863

Table 10
Comparison of results for three-bar truss design problem.
Algorithm Optimal values for variables Optimal weight
x1 x2
HHO 0.788662816 0.408283133832900 263.8958434
DEDS [64] 0.78867513 0.40824828 263.8958434
MVO [65] 0.78860276 0.408453070000000 263.8958499
GOA [63] 0.788897555578973 0.407619570115153 263.895881496069
MFO [57] 0.788244771 0.409466905784741 263.8959797
PSO–DE [66] 0.7886751 0.4082482 263.8958433
SSA [61] 0.788665414 0.408275784444547 263.8958434
MBA [67] 0.7885650 0.4085597 263.8958522
Tsa [68] 0.788 0.408 263.68
Ray and Sain [69] 0.795 0.395 264.3
CS [34] 0.78867 0.40902 263.9716
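The truss weights in Table 10 can be re-derived directly. The three-bar truss objective is not reprinted in this excerpt, so the sketch below uses the benchmark's customary form f(x) = (2√2·x1 + x2)·l with member length l = 100 cm; treat that form as an assumption rather than the paper's exact statement:

```python
import math

def truss_weight(x1, x2, l=100.0):
    # Weight of the three-bar truss: (2*sqrt(2)*x1 + x2) * l, where x1 and
    # x2 are cross-sectional areas and l = 100 cm is the customary member
    # length for this benchmark (assumed; not shown in this excerpt).
    return (2.0 * math.sqrt(2.0) * x1 + x2) * l

# Reproducing the DEDS row of Table 10:
print(round(truss_weight(0.78867513, 0.40824828), 4))  # ≈ 263.8958
```

The HHO, DEDS, PSO–DE, and SSA rows all land on essentially this same weight, which is why Table 10 separates them only in the later decimals.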

Table 11
Comparison of results for tension/compression spring problem.
Algorithms d D N Optimal cost
HHO 0.051796393 0.359305355 11.138859 0.012665443
SSA [61] 0.051207 0.345215 12.004032 0.0126763
TEO [70] 0.051775 0.3587919 11.16839 0.012665
MFO [57] 0.051994457 0.36410932 10.868422 0.0126669
SFS [71] 0.051689061 0.356717736 11.288966 0.012665233
GWO [56] 0.05169 0.356737 11.28885 0.012666
WOA [18] 0.051207 0.345215 12.004032 0.0126763
Arora [72] 0.053396 0.399180 9.185400 0.012730
GA2 [73] 0.051480 0.351661 11.632201 0.012704
GA3 [74] 0.051989 0.363965 10.890522 0.012681
Belegundu [75] 0.05 0.315900 14.250000 0.012833
CPSO [76] 0.051728 0.357644 11.244543 0.012674
DEDS [64] 0.051689 0.356717 11.288965 0.012665
GSA [25] 0.050276 0.323680 13.525410 0.012702
DELC [77] 0.051689 0.356717 11.288965 0.012665
HEAA [78] 0.051689 0.356729 11.288293 0.012665
WEO [79] 0.051685 0.356630 11.294103 0.012665
BA [80] 0.05169 0.35673 11.2885 0.012665
ESs [81] 0.051643 0.355360 11.397926 0.012698
Rank-iMDDE [82] 0.051689 0.35671718 11.288999 0.012665
CWCA [14] 0.051709 0.35710734 11.270826 0.012672
WCA [62] 0.05168 0.356522 11.30041 0.012665
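The spring designs in Table 11 can be checked against the formulation of Section 4.6.2. The sketch below pairs the objective with a simple static penalty; the paper itself integrates HHO with a barrier penalty [83], and the penalty weight used here is an arbitrary illustrative choice:

```python
def spring_cost(d, D, N):
    # f(z) = (N + 2) * D * d^2  (Section 4.6.2 objective)
    return (N + 2.0) * D * d * d

def spring_constraints(d, D, N):
    # g1..g4 of Section 4.6.2; each must be <= 0 for a feasible spring.
    g1 = 1.0 - (D ** 3 * N) / (71785.0 * d ** 4)
    g2 = ((4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
          + 1.0 / (5108.0 * d ** 2) - 1.0)
    g3 = 1.0 - 140.45 * d / (D ** 2 * N)
    g4 = (d + D) / 1.5 - 1.0
    return [g1, g2, g3, g4]

def penalized_cost(d, D, N, weight=1e6):
    # Static penalty: add 'weight' per unit of constraint violation
    # (an illustrative stand-in for the barrier penalty of [83]).
    violation = sum(max(0.0, g) for g in spring_constraints(d, D, N))
    return spring_cost(d, D, N) + weight * violation

# HHO's reported design (Table 11) is feasible up to print round-off:
print(round(spring_cost(0.051796393, 0.359305355, 11.138859), 6))  # ≈ 0.012665
```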

Fig. 13. Three-bar truss design problem.

The HHO is applied to this case based on 30 independent runs with 30 hawks and 500 iterations in each run. Since this benchmark case has some constraints, we need to integrate the HHO with a constraint handling technique. For the sake of simplicity, we used a barrier penalty approach [83] in the HHO. The results of HHO are compared to those reported for DEDS [64], MVO [65], GOA [63], MFO [57], PSO–DE [66], SSA [61], MBA [67], Tsa [68], Ray and Sain [69], and CS [34] in the previous literature. Table 10 shows the detailed results of the proposed HHO compared to the other techniques. Based on the results in Table 10, it is observed that HHO can reveal very competitive results compared to the DEDS, PSO–DE, and SSA algorithms. Additionally, the HHO outperforms the other optimizers significantly. The results obtained show that the HHO is capable of dealing with a constrained space.

4.6.2. Tension/compression spring design

In this case, our intention is to minimize the weight of a spring. The design variables for this case are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). For this case, the constraints on shear stress, surge frequency, and minimum deflection should be satisfied during the weight optimization. The objective and constraints of this problem can be formulated as follows:

Consider z = [z1, z2, z3] = [d, D, N],
Minimize f(z) = (z3 + 2) z2 z1^2,
Subject to
g1(z) = 1 − (z2^3 z3)/(71785 z1^4) ≤ 0,
g2(z) = (4z2^2 − z1 z2)/(12566(z2 z1^3 − z1^4)) + 1/(5108 z1^2) − 1 ≤ 0,
g3(z) = 1 − (140.45 z1)/(z2^2 z3) ≤ 0,
g4(z) = (z1 + z2)/1.5 − 1 ≤ 0.

There are several optimizers previously applied to this case, such as the SSA [61], TEO [70], MFO [57], SFS [71], GWO [56], WOA [18], the method presented by Arora [72], GA2 [73], GA3 [74], the method presented by Belegundu [75], CPSO [76], DEDS [64], GSA [25], DELC [77], HEAA [78], WEO [79], BA [80], ESs [81], Rank-iMDDE [82], CWCA [14], and WCA [62]. The results of HHO are compared to the aforementioned techniques in Table 11. Table 11 shows that the proposed HHO can achieve high-quality solutions very effectively when tackling this benchmark problem, and it exposes the best design. It is evident that the results of HHO are very competitive with those of SFS and TEO.

4.6.3. Pressure vessel design problem

In this well-regarded case, we minimize the fabrication cost; the problem has four parameters and four constraints. The variables of this case are (x1–x4): Ts (x1, thickness of the shell), Th (x2, thickness of the head), r (x3, inner radius), and L (x4, length of the section without the head). The overall configuration of this problem is shown in Fig. 14.

Fig. 14. Pressure vessel problem.

The formulation of this test case is as follows:

Consider z = [z1, z2, z3, z4] = [Ts, Th, R, L],
Minimize f(z) = 0.6224 z1 z3 z4 + 1.7781 z2 z3^2 + 3.1661 z1^2 z4 + 19.84 z1^2 z3,
Subject to
g1(z) = −z1 + 0.0193 z3 ≤ 0,
g2(z) = −z2 + 0.00954 z3 ≤ 0,
g3(z) = −π z3^2 z4 − (4/3) π z3^3 + 1,296,000 ≤ 0,
g4(z) = z4 − 240 ≤ 0.

The design space for this case is limited to 0 ≤ z1, z2 ≤ 99 and 0 ≤ z3, z4 ≤ 200. The results of HHO are compared to those of GWO [56], GA [73], HPSO [84], G-QPSO [85], WEO [79], IACO [86], BA [80], MFO [57], CSS [87], ESs [81], CPSO [76], BIANCA [88], MDDE [89], DELC [77], WOA [18], GA3 [74], the Lagrangian multiplier method (Kannan) [18], and the branch-and-bound method (Sandgren) [18]. Table 12 reports the optimum designs attained by HHO and the listed optimizers. Inspecting the results in Table 12, we detected that the HHO is the best optimizer in dealing with this problem and can attain superior results compared to the other techniques.

4.6.4. Welded beam design problem

The purpose of this well-known engineering case is to discover the best manufacturing cost with regard to a series of design constraints. A schematic view of this problem is illustrated in Fig. 15. The design variables are the thickness of the weld (h), the length (l), the height (t), and the thickness of the bar (b).

Fig. 15. Welded beam design problem.

This case can be formulated as follows:

Consider z = [z1, z2, z3, z4] = [h, l, t, b],
Minimize f(z) = 1.10471 z1^2 z2 + 0.04811 z3 z4 (14.0 + z2),
Subject to
g1(z) = τ(z) − τmax ≤ 0,
g2(z) = σ(z) − σmax ≤ 0,
g3(z) = δ(z) − δmax ≤ 0,
g4(z) = z1 − z4 ≤ 0,
g5(z) = P − Pc(z) ≤ 0,
g6(z) = 0.125 − z1 ≤ 0,
g7(z) = 1.10471 z1^2 + 0.04811 z3 z4 (14.0 + z2) − 5.0 ≤ 0,
Variable range
0.05 ≤ z1 ≤ 2.00, 0.25 ≤ z2 ≤ 1.30, 2.00 ≤ z3 ≤ 15.0,
where
τ(z) = √(τ′^2 + 2τ′τ″ (z2/(2R)) + τ″^2), τ′ = P/(√2 z1 z2),
τ″ = MR/J, M = P(L + z2/2),
R = √(z2^2/4 + ((z1 + z3)/2)^2),
J = 2{√2 z1 z2 [z2^2/12 + ((z1 + z3)/2)^2]},
σ(z) = 6PL/(z4 z3^2), δ(z) = 4PL^3/(E z3^3 z4),
Pc(z) = (4.013 E √(z3^2 z4^6/36)/L^2)(1 − (z3/(2L)) √(E/(4G))),
P = 6000 lb, L = 14 in, E = 30 × 10^6 psi, G = 12 × 10^6 psi.

The optimal results of HHO versus those attained by RANDOM [90], DAVID [90], SIMPLEX [90], APPROX [90], GA1 [73], GA2 [83], HS [91], GSA [18], ESs [81], and CDE [92] are represented in Table 13. From Table 13, it can be seen that the proposed HHO can reveal the best design settings with the minimum fitness value compared to the other optimizers.

4.6.5. Multi-plate disc clutch brake

In this discrete benchmark task, the intention is to optimize the total weight of a multiple disc clutch brake with regard to five variables: the actuating force, the inner and outer radii, the number of friction surfaces, and the thickness of the discs [94]. This problem has eight constraints according to the conditions of geometry and operating requirements. The feasible area for this case includes practically 70% of the solution space. However, there are few works that considered this problem in their tests.

Table 12
Comparison of results for pressure vessel design problem.
Algorithms Ts (x1 ) Th (x2 ) R(x3 ) L(x4 ) Optimal cost
HHO 0.81758383 0.4072927 42.09174576 176.7196352 6000.46259
GWO [56] 0.8125 0.4345 42.089181 176.758731 6051.5639
GA [73] 0.812500 0.437500 42.097398 176.654050 6059.9463
HPSO [84] 0.812500 0.437500 42.0984 176.6366 6059.7143
G-QPSO [85] 0.812500 0.437500 42.0984 176.6372 6059.7208
WEO [79] 0.812500 0.437500 42.098444 176.636622 6059.71
IACO [86] 0.812500 0.437500 42.098353 176.637751 6059.7258
BA [80] 0.812500 0.437500 42.098445 176.636595 6059.7143
MFO [57] 0.8125 0.4375 42.098445 176.636596 6059.7143
CSS [87] 0.812500 0.437500 42.103624 176.572656 6059.0888
ESs [81] 0.812500 0.437500 42.098087 176.640518 6059.7456
CPSO [76] 0.812500 0.437500 42.091266 176.746500 6061.0777
BIANCA [88] 0.812500 0.437500 42.096800 176.658000 6059.9384
MDDE [89] 0.812500 0.437500 42.098446 176.636047 6059.701660
DELC [77] 0.812500 0.437500 42.0984456 176.6365958 6059.7143
WOA [18] 0.812500 0.437500 42.0982699 176.638998 6059.7410
GA3 [74] 0.812500 0.437500 42.0974 176.6540 6059.9463
Lagrangian multiplier (Kannan) [18] 1.125000 0.625000 58.291000 43.6900000 7198.0428
Branch-bound (Sandgren) [18] 1.125000 0.625000 47.700000 117.701000 8129.1036
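The pressure vessel designs in Table 12 can likewise be checked against the Section 4.6.3 formulation; note that the volume constraint g3 is active at the reported optimum, so it evaluates to roughly zero up to the round-off of the printed decimals:

```python
import math

def vessel_cost(z1, z2, z3, z4):
    # Fabrication cost of Section 4.6.3: shell, head, and forming terms.
    return (0.6224 * z1 * z3 * z4 + 1.7781 * z2 * z3 ** 2
            + 3.1661 * z1 ** 2 * z4 + 19.84 * z1 ** 2 * z3)

def vessel_constraints(z1, z2, z3, z4):
    # g1..g4 of Section 4.6.3; feasible designs satisfy g <= 0.
    return [
        -z1 + 0.0193 * z3,
        -z2 + 0.00954 * z3,
        -math.pi * z3 ** 2 * z4 - (4.0 / 3.0) * math.pi * z3 ** 3 + 1296000.0,
        z4 - 240.0,
    ]

# HHO's reported design (Table 12):
cost = vessel_cost(0.81758383, 0.4072927, 42.09174576, 176.7196352)
print(round(cost, 2))  # ≈ 6000.46
```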

Table 13
Comparison of results for welded beam design problem.
Algorithm h l t b Optimal cost
HHO 0.204039 3.531061 9.027463 0.206147 1.73199057
RANDOM [90] 0.4575 4.7313 5.0853 0.66 4.1185
DAVID [90] 0.2434 6.2552 8.2915 0.2444 2.3841
SIMPLEX [90] 0.2792 5.6256 7.7512 0.2796 2.5307
APPROX [90] 0.2444 6.2189 8.2915 0.2444 2.3815
GA1 [73] 0.248900 6.173000 8.178900 0.253300 2.433116
GA2 [83] 0.208800 3.420500 8.997500 0.210000 1.748310
HS [91] 0.2442 6.2231 8.2915 0.2443 2.3807
GSA [18] 0.182129 3.856979 10 0.202376 1.879952
ESs [81] 0.199742 3.61206 9.0375 0.206082 1.7373
CDE [92] 0.203137 3.542998 9.033498 0.206179 1.733462
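The welded beam entries in Table 13 follow from the cost function of Section 4.6.4, and the weld shear stress τ(z) can be assembled from the auxiliary terms given there. The limits τmax, σmax, and δmax appear only symbolically in this excerpt, so the sketch computes the raw quantities rather than the full constraints:

```python
import math

P, L, E, G = 6000.0, 14.0, 30e6, 12e6  # constants from Section 4.6.4

def beam_cost(h, l, t, b):
    # f(z) = 1.10471 h^2 l + 0.04811 t b (14 + l)
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def beam_shear(h, l, t, b):
    # tau(z) built from tau', tau'', M, R, and J as defined in Section 4.6.4.
    tau_p = P / (math.sqrt(2.0) * h * l)
    M = P * (L + l / 2.0)
    R = math.sqrt(l ** 2 / 4.0 + ((h + t) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * h * l * (l ** 2 / 12.0 + ((h + t) / 2.0) ** 2)
    tau_pp = M * R / J
    return math.sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * l / (2.0 * R)
                     + tau_pp ** 2)

# HHO's reported design (Table 13):
print(round(beam_cost(0.204039, 3.531061, 9.027463, 0.206147), 5))  # ≈ 1.73199
```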

Table 14
Comparison of results for multi-plate disc clutch brake.
Algorithm ri r0 t F Z Optimal cost
HHO 69.9999999992493 90 1 1000 2.312781994 0.259768993
TLBO [93] 70 90 1 810 3 0.313656
WCA [62] 70 90 1 910 3 0.313656
PVS [94] 70 90 1 980 3 0.31366
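Table 14's weights follow directly from the objective of Section 4.6.5. The material density ρ is not printed in this excerpt; the value below is the one commonly used for this benchmark and should be treated as an assumption:

```python
import math

RHO = 0.0000078  # kg/mm^3 (assumed; the density is not printed in this excerpt)

def clutch_weight(ri, ro, t, Z):
    # f(x) = pi * (ro^2 - ri^2) * t * (Z + 1) * rho
    return math.pi * (ro ** 2 - ri ** 2) * t * (Z + 1.0) * RHO

# HHO's reported design (Table 14):
print(round(clutch_weight(70.0, 90.0, 1.0, 2.312781994), 6))  # ≈ 0.259769
```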

The optimal results of the proposed HHO are compared to those revealed by the TLBO [93], WCA [62], and PVS [94] algorithms. Table 14 shows the attained results of the different optimizers for this test case. From Table 14, we can recognize that the HHO attains the best rank and can outperform the well-known TLBO, WCA, and PVS in terms of the quality of solutions.

f(x) = π(ro^2 − ri^2) t (Z + 1) ρ
subject to:
g1(x) = ro − ri − ∆r ≥ 0
g2(x) = lmax − (Z + 1)(t + δ) ≥ 0
g3(x) = Pmax − Prz ≥ 0
g4(x) = Pmax vsr,max − Prz vsr ≥ 0
g5(x) = vsr,max − vsr ≥ 0
g6(x) = Tmax − T ≥ 0
g7(x) = Mh − s Ms ≥ 0
g8(x) = T ≥ 0
where,
Mh = (2/3) µ F Z (ro^3 − ri^3)/(ro^2 − ri^2), Prz = F/(π(ro^2 − ri^2)),
vrz = 2π n (ro^3 − ri^3)/(90(ro^2 − ri^2)), T = Iz π n/(30(Mh + Mf)),
∆r = 20 mm, Iz = 55 kg mm^2, Pmax = 1 MPa, Fmax = 1000 N,
Tmax = 15 s, µ = 0.5, s = 1.5, Ms = 40 Nm, Mf = 3 Nm, n = 250 rpm,
vsr,max = 10 m/s, lmax = 30 mm, ri,min = 60, ri,max = 80, ro,min = 90,
ro,max = 110, tmin = 1.5, tmax = 3, Fmin = 600, Fmax = 1000, Zmin = 2, Zmax = 9.

4.6.6. Rolling element bearing design problem

This engineering problem has 10 geometric variables and nine constraints covering assembly and geometry-based restrictions, and our purpose in tackling this case is to maximize the dynamic load carrying capacity. The formulation of this test case is described as follows:

Maximize Cd = fc Z^(2/3) Db^1.8 if Db ≤ 25.4 mm,
Cd = 3.647 fc Z^(2/3) Db^1.4 if Db > 25.4 mm,
Subject to

g1(z) = φ0/(2 sin^−1(Db/Dm)) − Z + 1 ≤ 0,
g2(z) = 2Db − KD,min(D − d) > 0,
g3(z) = KD,max(D − d) − 2Db ≥ 0,
g4(z) = ζ Bw − Db ≤ 0,
g5(z) = Dm − 0.5(D + d) ≥ 0,
g6(z) = (0.5 + e)(D + d) − Dm ≥ 0,
g7(z) = 0.5(D − Dm − Db) − ε Db ≥ 0,
g8(z) = fi ≥ 0.515,
g9(z) = fo ≥ 0.515,
where
fc = 37.91 [1 + {1.04 ((1 − γ)/(1 + γ))^1.72 (fi(2fo − 1)/(fo(2fi − 1)))^0.41}^(10/3)]^(−0.3) × [γ^0.3 (1 − γ)^1.39/(1 + γ)^(1/3)] [2fi/(2fi − 1)]^0.41,
x = [{(D − d)/2 − 3(T/4)}^2 + {D/2 − T/4 − Db}^2 − {d/2 + T/4}^2],
y = 2{(D − d)/2 − 3(T/4)}{D/2 − T/4 − Db},
φ0 = 2π − cos^−1(x/y),
γ = Db/Dm, fi = ri/Db, fo = ro/Db, T = D − d − 2Db,
D = 160, d = 90, Bw = 30, ri = ro = 11.033,
0.5(D + d) ≤ Dm ≤ 0.6(D + d), 0.15(D − d) ≤ Db ≤ 0.45(D − d),
4 ≤ Z ≤ 50, 0.515 ≤ fi, fo ≤ 0.6, 0.4 ≤ KD,min ≤ 0.5,
0.6 ≤ KD,max ≤ 0.7, 0.3 ≤ e ≤ 0.4, 0.02 ≤ ε ≤ 0.1, 0.6 ≤ ζ ≤ 0.85.

A schematic view of this problem is illustrated in Fig. 16.

Fig. 16. Rolling element bearing problem.

This case covers closely 1.5% of the feasible area of the target space. The results of HHO are compared to the GA4 [95], TLBO [93], and PVS [94] techniques. Table 15 tabulates the results of HHO versus those of the other optimizers. From Table 15, we see that the proposed HHO has detected the best solution, with the maximum cost, with substantial progress compared to the GA4, TLBO, and PVS algorithms.

5. Discussion on results

As per the results in the previous sections, we can recognize that the HHO shows significantly superior results for the multi-dimensional F1–F13 problems and the F14–F29 test cases compared to other well-established optimizers such as the GA, PSO, BBO, DE, CS, GWO, MFO, FPA, TLBO, BA, and FA methods. While the efficacy of methods such as PSO, DE, MFO, and GA significantly degrades with increasing dimension, the scalability results in Fig. 12 and Table 2 show that HHO is able to maintain a good equilibrium between exploratory and exploitative propensities on problem topographies with many variables. If we observe the results of F1–F7 in Tables 3–6, there is a big, significant gap between the results of several methods, such as the GA, PSO, DE, BBO, GWO, FPA, FA, and BA, and the high-quality solutions found by HHO. This observation confirms the advanced exploitative merits of the proposed HHO. Based on the solutions found for the multimodal and hybrid composition landscapes in Table 8, we detect that HHO finds superior and competitive solutions based on a stable balance between the diversification and intensification inclinations and a smooth transition between the searching modes. The results also support the superior exploratory strengths of the HHO. The results for the six well-known constrained cases in Tables 10–15 also disclose that HHO obtains the best solutions, and it is one of the top optimizers compared to many state-of-the-art techniques. The results highlight that the proposed HHO has several exploratory and exploitative mechanisms; consequently, it has efficiently avoided LO (local optima) and immature convergence drawbacks when solving different classes of problems, and in the case of any LO stagnation, the proposed HHO has shown a higher potential in jumping out of local optimum solutions.

The following features can theoretically assist us in realizing why the proposed HHO can be beneficial in exploring or exploiting the search space of a given optimization problem:

• The escaping energy parameter E has a dynamic, randomized, time-varying nature, which can further boost the exploration and exploitation patterns of HHO. This factor also requires HHO to perform a smooth transition between exploration and exploitation.
• Different diversification mechanisms with regard to the average location of hawks can boost the exploratory behavior of HHO in the initial iterations.
• Different LF-based patterns with short-length jumps enhance the exploitative behaviors of HHO when conducting a local search.
• The progressive selection scheme assists search agents in progressively improving their positions and selecting only better positions, which can improve the quality of solutions and the intensification powers of HHO during the course of iterations.
• HHO utilizes a series of searching strategies based on the E and r parameters and then selects the best movement step. This capability also has a constructive impact on the exploitation potential of HHO.
• The randomized jump strength J can assist candidate solutions in balancing the exploration and exploitation tendencies.
• The use of adaptive and time-varying parameters allows HHO to handle the difficulties of a search space, including local optimal solutions, multi-modality, and deceptive optima.
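The first bullet is easy to make concrete. Recalling the definition from earlier in the paper, the escaping energy decays as E = 2E0(1 − t/T) with E0 drawn uniformly from (−1, 1), and HHO explores while |E| ≥ 1 and exploits while |E| < 1:

```python
import random

def escaping_energy(t, T):
    # E = 2 * E0 * (1 - t / T), with E0 ~ U(-1, 1).
    E0 = random.uniform(-1.0, 1.0)
    return 2.0 * E0 * (1.0 - t / T)

def phase(E):
    # |E| >= 1 triggers global exploration; |E| < 1 triggers exploitation.
    return "exploration" if abs(E) >= 1.0 else "exploitation"

# The linear envelope 2*(1 - t/T) shrinks over time, so late iterations
# can only exploit, whichever E0 is drawn:
assert phase(escaping_energy(499, 500)) == "exploitation"
```

This shrinking envelope is what the discussion above calls the smooth, time-varying transition between the two search modes.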

Table 15
Comparison of results for rolling element bearing design problem.
Algorithms GA4 [95] TLBO [93] PVS [94] HHO
Dm 125.717100 125.7191 125.719060 125.000000
Db 21.423000 21.42559 21.425590 21.000000
Z 11.000000 11.000000 11.000000 11.092073
fi 0.515000 0.515000 0.515000 0.515000
fo 0.515000 0.515000 0.515000 0.515000
Kdmin 0.415900 0.424266 0.400430 0.400000
Kdmax 0.651000 0.633948 0.680160 0.600000
ϵ 0.300043 0.300000 0.300000 0.300000
e 0.022300 0.068858 0.079990 0.050474
ξ 0.751000 0.799498 0.700000 0.600000
Maximum cost 81843.30 81859.74 81859.741210 83011.88329
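The load capacities in Table 15 follow from the Cd expression of Section 4.6.6. The sketch below transcribes the printed fc factor for the Db ≤ 25.4 mm branch (all listed designs fall in that branch); it is an illustrative transcription, and small differences from the tabulated values reflect the rounding of the printed decimals:

```python
def bearing_fc(Dm, Db, fi, fo):
    # Capacity factor fc as printed in Section 4.6.6; gamma = Db / Dm.
    g = Db / Dm
    a = (1.04 * ((1.0 - g) / (1.0 + g)) ** 1.72
         * (fi * (2.0 * fo - 1.0) / (fo * (2.0 * fi - 1.0))) ** 0.41)
    return (37.91 * (1.0 + a ** (10.0 / 3.0)) ** (-0.3)
            * g ** 0.3 * (1.0 - g) ** 1.39 / (1.0 + g) ** (1.0 / 3.0)
            * (2.0 * fi / (2.0 * fi - 1.0)) ** 0.41)

def bearing_cd(Dm, Db, Z, fi, fo):
    # Dynamic load capacity for the Db <= 25.4 mm branch.
    return bearing_fc(Dm, Db, fi, fo) * Z ** (2.0 / 3.0) * Db ** 1.8

# GA4's design from Table 15 lands close to its tabulated 81843.30:
print(round(bearing_cd(125.7171, 21.423, 11.0, 0.515, 0.515), 1))
```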

Table 16
Description of unimodal benchmark functions.
Function | Dimensions | Range | fmin
f1(x) = Σ_{i=1}^{n} x_i^2 | 30, 100, 500, 1000 | [−100, 100] | 0
f2(x) = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i| | 30, 100, 500, 1000 | [−10, 10] | 0
f3(x) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2 | 30, 100, 500, 1000 | [−100, 100] | 0
f4(x) = max_i {|x_i|, 1 ≤ i ≤ n} | 30, 100, 500, 1000 | [−100, 100] | 0
f5(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2] | 30, 100, 500, 1000 | [−30, 30] | 0
f6(x) = Σ_{i=1}^{n} ([x_i + 0.5])^2 | 30, 100, 500, 1000 | [−100, 100] | 0
f7(x) = Σ_{i=1}^{n} i·x_i^4 + random[0, 1) | 30, 100, 500, 1000 | [−1.28, 1.28] | 0
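For instance, the first and fifth entries (the sphere and Rosenbrock functions) are one-liners, with their minima exactly where the table's fmin column says:

```python
def f1(x):
    # Sphere function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def f5(x):
    # Rosenbrock function: global minimum 0 at x = (1, ..., 1).
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

assert f1([0.0] * 30) == 0.0
assert f5([1.0] * 30) == 0.0
```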

Table 17
Description of multimodal benchmark functions.
Function | Dimensions | Range | fmin
f8(x) = Σ_{i=1}^{n} −x_i sin(√|x_i|) | 30, 100, 500, 1000 | [−500, 500] | −418.9829 × n
f9(x) = Σ_{i=1}^{n} [x_i^2 − 10 cos(2π x_i) + 10] | 30, 100, 500, 1000 | [−5.12, 5.12] | 0
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e | 30, 100, 500, 1000 | [−32, 32] | 0
f11(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1 | 30, 100, 500, 1000 | [−600, 600] | 0
f12(x) = (π/n) {10 sin^2(π y_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(π y_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a < x_i < a; k(−x_i − a)^m if x_i < −a | 30, 100, 500, 1000 | [−50, 50] | 0
f13(x) = 0.1 {sin^2(3π x_1) + Σ_{i=1}^{n} (x_i − 1)^2 [1 + sin^2(3π x_i + 1)] + (x_n − 1)^2 [1 + sin^2(2π x_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4) | 30, 100, 500, 1000 | [−50, 50] | 0

6. Conclusion and future directions

In this work, a novel population-based optimization algorithm called HHO is proposed to tackle different optimization tasks. The proposed HHO is inspired by the cooperative behaviors and chasing styles of predatory birds, Harris' hawks, in nature. Several equations are designed to simulate the social intelligence of Harris' hawks to solve optimization problems. Twenty-nine unconstrained benchmark problems were used to evaluate the performance of HHO. The exploitative, exploratory, and local optima avoidance behaviors of HHO were investigated using unimodal, multimodal, and composition problems. The results obtained show that HHO was capable of finding excellent solutions compared to other well-regarded optimizers. Additionally, the results of six constrained engineering design tasks also revealed that the HHO can show superior results compared to other optimizers.

We designed the HHO to be as simple as possible, with few exploratory and exploitative mechanisms. It is possible to utilize other evolutionary schemes such as mutation and crossover schemes, multi-swarm and multi-leader structures, evolutionary updating structures, and chaos-based phases. Such operators and ideas are beneficial for future works.

In future works, the binary and multi-objective versions of HHO can be developed. In addition, it can be employed to tackle various problems in engineering and other fields. Another interesting direction is to compare different constraint handling strategies in dealing with real-world constrained problems.

Table 18
Description of fixed-dimension multimodal benchmark functions.
Function | Dimensions | Range | fmin
f14(x) = (1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_ij)^6))^(−1) | 2 | [−65, 65] | 1
f15(x) = Σ_{i=1}^{11} [a_i − x_1(b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2 | 4 | [−5, 5] | 0.00030
f16(x) = 4x_1^2 − 2.1x_1^4 + (1/3)x_1^6 + x_1 x_2 − 4x_2^2 + 4x_2^4 | 2 | [−5, 5] | −1.0316
f17(x) = (x_2 − (5.1/(4π^2)) x_1^2 + (5/π) x_1 − 6)^2 + 10(1 − 1/(8π)) cos x_1 + 10 | 2 | [−5, 5] | 0.398
f18(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)] | 2 | [−2, 2] | 3
f19(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{3} a_ij (x_j − p_ij)^2) | 3 | [1, 3] | −3.86
f20(x) = −Σ_{i=1}^{4} c_i exp(−Σ_{j=1}^{6} a_ij (x_j − p_ij)^2) | 6 | [0, 1] | −3.32
f21(x) = −Σ_{i=1}^{5} [(X − a_i)(X − a_i)^T + c_i]^(−1) | 4 | [0, 10] | −10.1532
f22(x) = −Σ_{i=1}^{7} [(X − a_i)(X − a_i)^T + c_i]^(−1) | 4 | [0, 10] | −10.4028
f23(x) = −Σ_{i=1}^{10} [(X − a_i)(X − a_i)^T + c_i]^(−1) | 4 | [0, 10] | −10.5363
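As a spot check on Table 18's fmin column, f16 (the six-hump camel back) evaluated near one of its two known minimizers reproduces the listed −1.0316:

```python
def f16(x1, x2):
    # Six-hump camel back function (Table 18).
    return (4.0 * x1 ** 2 - 2.1 * x1 ** 4 + x1 ** 6 / 3.0
            + x1 * x2 - 4.0 * x2 ** 2 + 4.0 * x2 ** 4)

# Approximate coordinates of one global minimizer:
print(round(f16(0.0898, -0.7126), 4))  # ≈ -1.0316
```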

Table 19
Details of hybrid composition functions F24–F29 (MM: Multi-modal, R: Rotated, NS: Non-Separable, S: Scalable, D: Dimension).
ID (CEC05-ID) Description Properties D Range
F24 (C16) Rotated Hybrid Composition Function MM, R, NS, S 30 [−5, 5]D
F25 (C18) Rotated Hybrid Composition Function MM, R, NS, S 30 [−5, 5]D
F26 (C19) Rotated Hybrid Composition Function with narrow basin global optimum MM, NS, S 30 [−5, 5]D
F27 (C20) Rotated Hybrid Composition Function with Global Optimum on the Bounds MM, NS, S 30 [−5, 5]D
F28 (C21) Rotated Hybrid Composition Function MM, R, NS, S 30 [−5, 5]D
F29 (C25) Rotated Hybrid Composition Function without bounds MM, NS, S 30 [−5, 5]D

Table 20
p-values of the Wilcoxon rank-sum test with 5% significance for F1–F13 with 30 dimensions (p-values ≥ 0.05 are shown in bold face; NaN means ''Not a Number'' returned by the test).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 2.85E−11 2.88E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F2 2.72E−11 2.52E−11 4.56E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F3 2.71E−11 2.63E−11 2.79E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F4 2.62E−11 2.84E−11 2.62E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F5 2.62E−11 2.52E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F6 2.72E−11 2.71E−11 2.62E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 2.25E−04 3.02E−11
F7 2.52E−11 2.71E−11 9.19E−11 3.02E−11 3.69E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F8 7.83E−09 2.71E−11 7.62E−09 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F9 9.49E−13 1.00E−12 NaN 1.21E−12 4.35E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 4.57E−12 1.21E−12
F10 1.01E−12 1.14E−12 1.05E−12 1.21E−12 1.16E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 4.46E−13 1.21E−12
F11 9.53E−13 9.57E−13 9.54E−13 1.21E−12 2.79E−03 1.21E−12 1.21E−12 1.21E−12 1.21E−12 NaN 1.21E−12
F12 2.63E−11 2.51E−11 2.63E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 1.01E−08 3.02E−11 1.07E−06 3.02E−11
F13 2.51E−11 2.72E−11 2.61E−11 3.02E−11 3.02E−11 3.02E−11 5.49E−11 3.02E−11 3.02E−11 2.00E−06 3.02E−11

Table 21
p-values of the Wilcoxon rank-sum test with 5% significance for F1–F13 with 100 dimensions (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 2.98E−11 2.52E−11 2.52E−11 3.01E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F2 2.88E−11 2.72E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F3 2.72E−11 2.72E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F4 2.40E−11 2.52E−11 2.51E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.01E−11 3.02E−11
F5 2.72E−11 2.62E−11 2.84E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F6 2.52E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F7 2.71E−11 2.79E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 4.20E−10 3.02E−11
F8 2.72E−11 2.51E−11 2.83E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 5.57E−10 3.02E−11 3.02E−11 3.02E−11
F9 1.06E−12 9.57E−13 9.54E−13 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 3.34E−01 1.21E−12
F10 9.56E−13 9.57E−13 1.09E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 4.16E−14 1.21E−12
F11 1.06E−12 9.55E−13 9.56E−13 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 NaN 1.21E−12
F12 2.72E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F13 2.72E−11 2.72E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11

Table 22
p-values of the Wilcoxon rank-sum test with 5% significance for F1–F13 with 500 dimensions (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 2.94E−11 2.79E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F2 2.52E−11 2.63E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F3 2.88E−11 2.52E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F4 2.25E−11 2.52E−11 2.59E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F5 2.72E−11 2.72E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F6 2.52E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F7 2.52E−11 2.79E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 4.98E−11 3.02E−11
F8 2.52E−11 2.72E−11 2.63E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F9 1.06E−12 1.06E−12 1.06E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 NaN 1.21E−12
F10 9.57E−13 9.57E−13 1.06E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 6.14E−14 1.21E−12
F11 9.57E−13 9.57E−13 1.06E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 NaN 1.21E−12
F12 2.52E−11 2.52E−11 2.79E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F13 2.79E−11 2.52E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11

Table 23
p-values of the Wilcoxon rank-sum test with 5% significance for F1–F13 with 1000 dimensions (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 3.01E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F2 2.63E−11 1.21E−12 2.72E−11 3.02E−11 3.02E−11 1.21E−12 1.21E−12 3.02E−11 1.21E−12 1.21E−12 1.21E−12
F3 2.86E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F4 1.93E−11 2.52E−11 2.07E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F5 2.72E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F6 2.63E−11 2.63E−11 2.63E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F7 2.63E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F8 2.52E−11 2.52E−11 2.52E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F9 1.01E−12 1.06E−12 9.57E−13 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 NaN 1.21E−12
F10 1.01E−12 1.01E−12 9.57E−13 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 8.72E−14 1.21E−12
F11 1.06E−12 1.01E−12 9.57E−13 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.17E−13 1.21E−12
F12 2.52E−11 2.52E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11
F13 2.52E−11 2.63E−11 2.72E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11 3.02E−11

Table 24
p-values of the Wilcoxon rank-sum test with 5% significance for F14–F29 problems (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F14 8.15E−02 2.89E−08 8.15E−03 1.08E−01 5.20E−08 7.46E−12 1.53E−09 6.13E−14 9.42E−06 8.15E−02 1.00E+00
F15 2.78E−11 7.37E−11 2.51E−11 9.76E−10 1.37E−01 3.34E−11 3.16E−10 8.69E−10 5.00E−10 5.08E−06 3.92E−02
F16 1.05E−12 9.53E−13 9.49E−13 NaN NaN 5.54E−03 NaN NaN NaN NaN NaN
F17 1.87E−12 1.89E−12 2.06E−12 1.61E−01 1.61E−01 5.97E−01 1.61E−01 1.61E−01 1.61E−01 1.61E−01 1.61E−01
F18 NaN 9.53E−13 NaN NaN 1.09E−02 1.34E−03 NaN NaN NaN NaN NaN
F19 2.50E−11 5.24E−02 1.91E−09 1.65E−11 1.06E−01 5.02E−10 1.65E−11 1.65E−11 4.54E−10 1.65E−11 1.65E−11
F20 8.74E−03 2.54E−04 8.15E−03 6.15E−03 5.74E−06 5.09E−06 1.73E−07 NaN 1.73E−04 1.73E−04 1.73E−04
F21 1.22E−04 6.25E−05 5.54E−03 1.91E−08 5.54E−03 6.85E−07 1.71E−07 1.91E−08 9.42E−06 1.73E−04 1.79E−04
F22 1.64E−07 5.00E−10 8.15E−08 2.51E−11 8.15E−08 6.63E−07 5.24E−04 1.73E−08 8.15E−08 8.81E−10 1.21E−12
F23 1.54E−05 5.00E−10 8.88E−08 2.51E−11 8.88E−08 1.73E−08 5.14E−04 1.69E−08 8.88E−08 8.81E−10 NaN
F24 2.40E−01 4.69E−08 1.64E−05 1.17E−05 2.84E−04 3.02E−11 3.03E−03 3.08E−08 8.89E−10 8.35E−08 3.20E−09
F25 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12
F26 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12
F27 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12 1.21E−12
F28 0.012732 1.17E−09 5.07E−10 0.001114 1.01E−08 3.02E−11 2.37E−10 2.02E−08 8.35E−08 0.446419 2.71E−11
F29 1.85E−08 6.52E−09 3.02E−11 1.29E−06 7.12E−09 3.02E−11 1.17E−09 3.02E−11 3.02E−11 2.6E−08 3.02E−11
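The entries above are two-sided Wilcoxon rank-sum p-values. A statistics package gives the exact values; the dependency-free sketch below uses the test's normal approximation (average ranks for ties, no tie-variance correction) just to make the mechanics concrete, with illustrative sample data:

```python
import math

def ranksum_p(a, b):
    # Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    combined = sorted(list(a) + list(b))
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1
        ranks[combined[i]] = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        i = j
    r1 = sum(ranks[v] for v in a)               # rank sum of sample a
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0               # mean of r1 under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    if sigma == 0.0:
        return 1.0
    z = (r1 - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2.0))   # two-sided tail probability

# Clearly separated samples are flagged at the 5% level:
print(ranksum_p(list(range(1, 11)), list(range(11, 21))) < 0.05)  # True
```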

Acknowledgments References

This research is funded by Zhejiang Provincial Natural Science [1] R. Abbassi, A. Abbassi, A.A. Heidari, S. Mirjalili, An efficient salp swarm-
Foundation of China (LY17F020012), Science and Technology Plan inspired algorithm for parameters identification of photovoltaic cell
Project of Wenzhou of China (ZG2017019). models, Energy Convers. Manage. 179 (2019) 362–372.
We also acknowledge the comments of anonymous reviewers. [2] H. Faris, A.M. Al-Zoubi, A.A. Heidari, I. Aljarah, M. Mafarja, M.A. Hassonah,
H. Fujita, An intelligent system for spam detection and identification of the
most relevant features based on evolutionary random weight networks, Inf.
Appendix A Fusion 48 (2019) 67–83.
[3] J. Nocedal, S.J. Wright, Numerical Optimization, 2nd ed., 2006.
See Tables 16–19
[4] G. Wu, Across neighborhood search for numerical optimization, Inform.
Sci. 329 (2016) 597–618.
Appendix B [5] G. Wu, W. Pedrycz, P.N. Suganthan, R. Mallipeddi, A variable reduction
strategy for evolutionary algorithms handling equality constraints, Appl.
See Tables 20–24 Soft Comput. 37 (2015) 774–786.
870 A.A. Heidari, S. Mirjalili, H. Faris et al. / Future Generation Computer Systems 97 (2019) 849–872
Ali Asghar Heidari is a Ph.D. research intern at the Department of Computer Science, School of Computing, National University of Singapore (NUS), Singapore. He is also a Ph.D. candidate at the University of Tehran, awarded and funded by Iran's National Elites Foundation (INEF). His main research interests are advanced machine learning, evolutionary computation, metaheuristics, prediction, information systems, and spatial modeling. He has published more than 14 papers in international journals such as Information Fusion, Energy Conversion and Management, Applied Soft Computing, and Knowledge-Based Systems.
Seyedali Mirjalili is a lecturer at Griffith University and is internationally recognized for his advances in Swarm Intelligence (SI) and optimization, including the first set of SI techniques from a synthetic intelligence standpoint (a radical departure from how natural systems are typically understood) and a systematic design framework to reliably benchmark, evaluate, and propose computationally cheap, robust optimization algorithms. Dr. Mirjalili has published over 80 journal articles, many in high-impact journals, with over 7000 citations in total, an H-index of 29, and a G-index of 84. According to Google Scholar metrics, he is globally the third most cited researcher in Engineering Optimization and Robust Optimization. He serves as an associate editor of Advances in Engineering Software and the journal Algorithms.
Hossam Faris is an Associate Professor at the Business Information Technology Department, King Abdullah II School for Information Technology, The University of Jordan, Jordan. He received his B.A. and M.Sc. degrees (with excellent rates) in Computer Science from Yarmouk University and Al-Balqa' Applied University, Jordan, in 2004 and 2008, respectively. He was then awarded a full-time, competition-based Ph.D. scholarship from the Italian Ministry of Education and Research to pursue his Ph.D. in e-Business at the University of Salento, Italy, where he obtained his degree in 2011. In 2016, he worked as a postdoctoral researcher with the GeNeura team at the Information and Communication Technologies Research Center (CITIC), University of Granada, Spain. Since 2017, he has led, together with Dr. Ibrahim Aljarah, the research group on Evolutionary Algorithms and Machine Learning (Evo-ml). His research interests include applied computational intelligence, evolutionary computation, knowledge systems, data mining, the Semantic Web, and ontologies.
Ibrahim Aljarah is an Associate Professor of Big Data Mining and Computational Intelligence at the University of Jordan, Department of Information Technology, Jordan. He obtained his bachelor's degree in Computer Science from Yarmouk University, Jordan, in 2003, his master's degree in Computer Science and Information Systems from the Jordan University of Science and Technology, Jordan, in 2006, and his Ph.D. in Computer Science from North Dakota State University (NDSU), USA, in May 2014. Since 2017, he has led, together with Dr. Hossam Faris, the research group on Evolutionary Algorithms and Machine Learning (Evo-ml). He has organized and participated in many conferences in the fields of data mining, machine learning, and big data, such as NTIT, CSIT, IEEE NABIC, CASON, and BIGDATA Congress. Furthermore, he has contributed to several projects in the USA, such as the Vehicle Class Detection System (VCDS), Pavement Analysis Via Vehicle Electronic Telemetry (PAVVET), and Farm Cloud Storage System (CSS) projects. He has published more than 50 papers in refereed international conferences and journals. His research focuses on data mining, machine learning, big data, MapReduce, Hadoop, swarm intelligence, evolutionary computation, social network analysis (SNA), and large-scale distributed algorithms.
Majdi Mafarja received his B.Sc. in Software Engineering and M.Sc. in Computer Information Systems from Philadelphia University and The Arab Academy for Banking and Financial Sciences, Jordan, in 2005 and 2007, respectively. Dr. Mafarja did his Ph.D. in Computer Science at the National University of Malaysia (UKM), where he was a member of the Data Mining and Optimization Research Group (DMO). He is now an Assistant Professor at the Department of Computer Science at Birzeit University. His research interests include evolutionary computation, metaheuristics, and data mining.

Huiling Chen is currently an Associate Professor in the Department of Computer Science at Wenzhou University, China. He received his Ph.D. degree from the Department of Computer Science and Technology at Jilin University, China. His present research interests center on evolutionary computation, machine learning, and data mining, as well as their applications to medical diagnosis and bankruptcy prediction. He has published more than 100 papers in international journals and conference proceedings, including Pattern Recognition, Expert Systems with Applications, Knowledge-Based Systems, Soft Computing, Neurocomputing, Applied Mathematical Modelling, IEEE Access, and PAKDD, among others.