0% found this document useful (0 votes)
17 views19 pages

10.1007@s00500 020 04834 7

The document presents a novel quasi-reflected Harris hawks optimization algorithm (QRHHO) aimed at enhancing the performance of the existing Harris hawks optimization (HHO) algorithm by integrating a quasi-reflection-based learning mechanism. The QRHHO is tested against various benchmark functions and demonstrates improved convergence speed and solution accuracy compared to basic HHO and its variants. This research contributes to the field of meta-heuristic algorithms by providing a more effective approach for solving global optimization problems.

Uploaded by

aparnasnair283
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
17 views19 pages

10.1007@s00500 020 04834 7

The document presents a novel quasi-reflected Harris hawks optimization algorithm (QRHHO) aimed at enhancing the performance of the existing Harris hawks optimization (HHO) algorithm by integrating a quasi-reflection-based learning mechanism. The QRHHO is tested against various benchmark functions and demonstrates improved convergence speed and solution accuracy compared to basic HHO and its variants. This research contributes to the field of meta-heuristic algorithms by providing a more effective approach for solving global optimization problems.

Uploaded by

aparnasnair283
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 19

Soft Computing

https://doi.org/10.1007/s00500-020-04834-7 (0123456789().,-volV)(0123456789().
,- volV)

METHODOLOGIES AND APPLICATION

A novel quasi-reflected Harris hawks optimization algorithm for global


optimization problems
Qian Fan1 • Zhenjian Chen1 • Zhanghua Xia1

Ó Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
Harris hawks optimization (HHO) is a recently developed meta-heuristic optimization algorithm based on hunting behavior
of Harris hawks. Similar to other meta-heuristic algorithms, HHO tends to be trapped in low diversity, local optima and
unbalanced exploitation ability. In order to improve the performance of HHO, a novel quasi-reflected Harris hawks
algorithm (QRHHO) is proposed, which combines HHO algorithm and quasi-reflection-based learning mechanism (QRBL)
together. The improvement includes two parts: the QRBL mechanism is introduced firstly to increase the population
diversity in the initial stage, and then, QRBL is added in each population position update to improve the convergence rate.
The proposed method will also be helpful to control the balance between exploration and exploitation. The performance of
QRHHO has been tested on twenty-three benchmark functions of various types and dimensions. Through comparison with
the basic HHO, HHO combined with opposition-based learning mechanism and HHO combined with quasi-opposition-
based learning mechanism, the results demonstrate that QRHHO can effectively improve the convergence speed and
solution accuracy of the basic HHO and two variants of HHO. At the same time, QRHHO is also better than other swarm-
based intelligent algorithms.

Keywords Harris hawks optimization  Quasi-reflection-based learning  Opposition-based learning  Benchmark


functions  Swarm-based intelligent algorithms

1 Introduction (Storn and Price 1997), simulated annealing (SA) (Kirk-


patrick et al. 1983), particle swarm optimization algorithm
In recent years, meta-heuristic algorithms have attracted (PSO) (Kennedy and Eberhart 2002), artificial bee colony
extensive attention in various fields. Compared with the algorithm (ABC)(Karaboga and Basturk 2007), Krill Herd
traditional optimization algorithms, the meta-heuristic (KH) (Gandomi and Alavi 2012), gravitational search
algorithms are simple in principle and easy to implement. algorithm (GSA) (Rashedi et al. 2009), fruit fly optimiza-
Also, the algorithms do not need gradient information and tion algorithm (FOA) (Pan 2012), ant lion optimizer
have the advantages of bypassing local optimal and thus (ALO)(Mirjalili 2015a), moth-flame optimization (MFO)
have been widely used to solve the optimization problems (Mirjalili 2015b), grey wolf optimizer (GWO) (Mirjalili
in various disciplines or engineering applications. et al. 2014), sine cosine algorithm (SCA) (Mirjalili 2016),
The meta-heuristic algorithms mainly include opti- grasshopper optimization algorithm (GOA) (Saremi et al.
mization algorithms based on evolution, physics, human 2017), salp swarm algorithm (SSA) (Mirjalili et al. 2017),
and swarm (Mirjalili and Lewis 2016), such as genetic Henry gas solubility optimization algorithm (HGSO)
algorithm (GA) (Holland 1992), differential evolution (DE) (Hashim et al. 2019; Yildiz et al. 2020) and so on.
Harris hawks optimization algorithm (HHO) (Heidari
et al. 2019) is a recently proposed population-based meta-
Communicated by V. Loia. heuristic algorithm inspired by the hunting behavior of
Harris hawks, which consists of two main steps: searching
& Qian Fan
[email protected] for prey and attacking prey. Compared with some of the
most advanced meta-heuristic algorithms, HHO has
1
College of Civil Engineering, Fuzhou University, already been proved to be more effective on some
Fuzhou 350116, China

123
Q. Fan et al.

benchmark functions. Currently, it has been applied to modifying HHO with a long-term memory concept. The
several disciplines design and engineering optimization comparison results showed that LHHO could maintain
problems. For example, Qais et al. (2020) applied HHO to exploration up to a certain level and achieve better results
extract the unknown parameters of the three-diode photo- than HHO. Besides, Yıldız et al. (2019) combined HHO
voltaic model. The results revealed that the proposed with the Nelder–Mead local search algorithm and used the
method was efficient and could easily identify the electrical new algorithm to solve a milling manufacturing opti-
parameters of any commercial photovoltaic panel based on mization problem.
the datasheet values only. Abbasi et al. (et al. 2019) con- Although the above-mentioned variants of HHO are
firmed that HHO had a superior performance in minimizing better optimized for specific problems, they are not suit-
the entropy generation of the microchannel. Shehabeldeen able for most optimization problems. Therefore, it is
et al. (2019) found that HHO could efficiently search for meaningful to develop more efficient algorithm. This paper
optimal values of the adaptive neuro-fuzzy inference sys- puts forward a new quasi-reflected HHO algorithm
tem parameters and determine the optimal operating con- (QRHHO) that combines HHO and the quasi-reflection-
ditions of the friction stir welding process. Houssein et al. based learning (QRBL) mechanism to further improve the
(2020b) employed HHO to determine the sink node loca- basic HHO. QRBL is a variant of opposition-based learn-
tion in a large-scale wireless sensor network, and the ing (OBL), which is an effective intelligent optimization
method ultimately prolonged the lifetime of the network in technique. It has been utilized by meta-heuristic algorithms
an efficacious way. Yıldız et al. applied HHO to solve the such as biogeography-based optimization (BBO) (Ergezer
structural optimization problem of the vehicle component et al. 2009), ions motion optimization algorithm (IMO)
in industry (Yildiz and Yildiz 2019) and used HHO to (Das et al. 2016) and symbiotic organisms search algorithm
select optimal machining parameters in manufacturing (SOS) (Das et al. 2017). The algorithms that integrated the
operations (Yildiz et al. 2019). Moreover, Houssein et al. QRBL mechanism show better convergence speed than the
(2020a) hybridized HHO with the support vector machine original algorithms and can also avoid local optimization.
for the chemical descriptor selection and chemical com- To verify the effectiveness of the proposed QRHHO,
pound activities, from which a superior accuracy was twenty-three typical benchmark functions of various types
obtained compared with well-known algorithms. and dimensions are used for simulation experiments. Its
However, as a new swarm intelligence algorithm, HHO performance is then compared with the basic HHO algo-
is still in its infancy. Like other swarm intelligent opti- rithm and the other two improved HHO algorithms. The
mization algorithms, it is more difficult for HHO to achieve experimental results show that our proposed QRHHO is a
a balance between exploration and exploitation due to the novel powerful search algorithm with good convergence
randomness of the optimization process. Therefore, the speed and global optimization ability for various opti-
algorithm may result in low accuracy and slow conver- mization problems.
gence, as well as fall into local optimum. To overcome the The rest of the paper is organized as follows. In Sect. 2,
limitations of basic HHO, some scholars have made some the Harris hawks optimization algorithm is introduced.
improvements in different ways. For example, Jia et al. Section 3 presents the proposed QRHHO algorithm. In
(2019) introduced dynamic control parameter strategy and Sect. 4, the simulation experiments are implemented and
mutation mechanism to HHO to enhance the search effi- the results are analyzed in detail. Finally, the conclusions
ciency. Kamboj et al. (2020) proposed a hybrid HHO and and future work are discussed in Sect. 5.
SCA optimization algorithm, which was much better than
standard HHO, SCA and other optimization algorithm.
Yousri et al. (2020) presented a novel MHHO by modi- 2 The basic Harris hawks optimization
fying exploration phase and prey energy equation and algorithm
applied the method to optimize photovoltaic array recon-
figuration for alleviating the partial shading influence. Too Harris hawks optimization algorithm is a new population-
et al. (2019) proposed two binary versions of HHO, namely based, nature-inspired optimization algorithm that mimics
BHHO and QBHHO, to tackle the feature selection prob- Harris hawks’ behavior of searching and attacking prey. It
lem in classification tasks, which could achieve the highest mainly includes the exploratory and exploitative phases.
classification accuracy compared with other algorithms. With respect to HHO, the prey energy is proposed to
Kurtulus et al. (2020) hybridized HHO and simulated represent the escape of prey. Harris hawks adopt different
annealing to accelerating its global convergence perfor- strategies to catch prey according to the change of this
mance and firstly used the hybridized algorithm to optimize energy, which is modeled as:
the design parameters for highway guardrail systems.
Hussain et al. (2019) developed LHHO algorithm through

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

 t 2.2 Exploitation phase


E ¼ 2E0 1  ð1Þ
T
E0 ¼ 2r1  1 ð2Þ In the exploitation phase, four approaches have been made
to model the attacking stage of hawks. A random number
where E represents the escaping energy of the prey, t is the r is proposed to represent the chance of a prey in suc-
current iteration number and T is the maximum number of cessfully escaping (r \ 0.5) or unsuccessfully escaping
the allowed iterations. E0 illustrates the initial state of its (r C 0.5) before surprise pounce. Whatever the prey does,
energy, which randomly varies in the interval of (- 1, 1) the hawks will perform a hard or soft besiege to catch the
during each iteration, while r1 is a random number in (0, 1). prey according to the prey energy E. When |E| C 0.5, the
The variable E has a decreasing trend during the itera- soft besiege is conducted, while the hard besiege is
tions, which means that the energy of the prey gradually implemented if |E| \ 0.5.
decreases during escaping. When the escaping energy
|E| C 1, the hawks search different regions to explore the 2.2.1 Soft besiege
location of the prey, and the algorithm enters the explo-
ration phase. When |E| \ 1, the hawks attack the prey When r C 0.5 and |E| C 0.5, the prey still has enough
found in the previous phase, and the algorithm enters the energy to escape but finally fails. The Harris’ hawks
exploitation phase. The details of each phase are intro- encircle the rabbit softly and then perform the surprise
duced in the following subsections. pounce. This behavior is modeled as follows:
Xðt þ 1Þ ¼ DX  EjJXrabbit ðtÞ  XðtÞj ð5Þ
2.1 Exploration phase
DXðtÞ ¼ Xrabbit ðtÞ  XðtÞ ð6Þ
In HHO, Harris hawks are the candidate solutions and the
J ¼ 2  ð1  r6 Þ ð7Þ
best candidate solution in each step is considered as the
intended prey or close to the optimal solution. In the where DX(t) indicates the difference between the position
exploration phase, Harris hawks randomly perch on tall of the rabbit and the current location in iteration t, || rep-
trees or perch based on the positions of the other family resents the absolute value, r6 is a random number inside (0,
members and the rabbit to search for the prey. To model 1), and J indicates the random jump strength of the rabbit
these two mechanisms, a probability of 50% is assumed to during the escaping procedure.
choose between them during the optimization. The math-
ematical model is as follows: 2.2.2 Hard besiege
Xðt þ 1Þ
 If r C 0.5 and |E| \ 0.5, the prey has not enough energy to
Xrand ðtÞ  r2 jXrand ðtÞ  2r3 XðtÞj q  0:5
¼ escape and the Harris’ hawks hardly encircle the intended
ðXrabbit ðtÞ  Xm ðtÞÞ  r4 ðLB + r5 ðUB  LBÞÞ q\0:5 prey to finally perform the surprise pounce. The behavior is
ð3Þ defined as following:
where X(t ? 1) is the position of hawks in the next itera- Xðt þ 1Þ ¼ Xrabbit ðtÞ  EjDXðtÞj ð8Þ
tion, Xrand(t) indicates a random hawk chosen from the
current population and X(t) represents the current position
2.2.3 Soft besiege with progressive rapid dives
of hawks, Xrabbit(t) is the position of rabbit. r2, r3, r4, r5 and
q illustrate the random numbers in (0, 1), which are
When |E| C 0.5 but r \ 0.5, the prey has enough energy to
updated in each iteration, UB and LB describe the upper
escape successfully, but the hawks will make several rapid
and lower bounds of variables, while X m(t) is the average
dives around the prey and gradually correct their positions
position of the current population of hawks, which is cal-
and directions according to the deceptive action of the
culated as:
prey, so as to choose the best time to dive into the prey. To
1X N implement the soft besiege, the hawks decide the next
Xm ðtÞ ¼ Xi ðtÞ ð4Þ move according to Eq. (9):
N i¼1
Y ¼ Xrabbit ðtÞ  EjJXrabbit ðtÞ  XðtÞj ð9Þ
where Xi (t) represents the location of each hawk in itera-
tion t and N is the total number of hawks.

123
Q. Fan et al.

If the hawks see their prey running away, they make Algorithm 1: Harris’ hawks optimization algorithm
more deceptive movements and also begin to make irreg- Generate random population of hawks Xi (i =1,2,…,n)
ular, sudden and rapid dives as they approach rabbits. To Calculate the fitness values of each hawk
is the location of rabbit (best solution)
model this escaping behavior of the prey, the levy flight is
introduced during optimization. The rule is as follows while (t<maximum number of iterations)
for each hawk (Xi)
Z ¼ Y þ S  LFðDÞ ð10Þ Update the initial energy E0 by the Eq. (2)
Update the prey energy E by the Eq. (1)
where D is the dimension of problem, S indicates a random
Update the jump strength J by the Eq. (7)
vector by size 1 9 D and LF is the levy flight function,
if (|E|≥1)
which is calculated as:
Update the position of the current solution by the Eq. (3)
ur end if
LFðxÞ ¼ 0:01  1 ð11Þ
jvjb if (|E|< 1)
  1b1 if (r≥0.5 and |E|≥0.5 )
0
Cð1 þ bÞ  sin pb2
Update the position of the current solution by the Eq. (5)
r¼@   b1
A ð12Þ else if (r≥0.5 and |E|< 0.5 )
C 2 b2 2 Þ
1þb ð Update the position of the current solution by the Eq. (8)
else if (r < 0.5 and |E|≥0.5 )
where u, v are random numbers in (0, 1), b represents a Update the position of the current solution by the Eq. (13)
default constant set to 1.5 here. Hence, the mathematical else if (r < 0.5 and |E|< 0.5 )
model can be described as: Update the position of the current solution by the Eq. (14)
 end if
Y if FðYÞ\FðXðtÞÞ
Xðt þ 1Þ ¼ ð13Þ end for
Z if FðZÞ\FðXðtÞÞ Check if any solution goes beyond the search space and amend it
Calculate the fitness of each hawk
where Y and Z are obtained using Eqs. (9) and (10). During
If there is a better solution, update Xrabbit
each step, HHO only selects the better position Y or Z as
t=t+1
the next position. This strategy is applied to all search
end while
agents.
Return Xrabbit

2.2.4 Hard besiege with progressive rapid dives


3 Quasi-reflected Harris hawks optimization
If |E| \ 0.5 and r \ 0.5, the prey has a low escaping energy algorithm
and the hawks construct a hard besiege before the surprise
pounces to catch and kill the prey. Meanwhile, the hawks 3.1 Opposition-based learning
try to decrease the distance of their average position to the
escaping prey. In this situation, the current positions are Tizhoosh (2005) firstly proposed the opposition-based
updated using Eq. (14): learning (OBL). Its main idea is to generate the opposite

Y if FðYÞ\FðXðtÞÞ solution of the feasible solution, evaluate the opposite
Xðt þ 1Þ ¼ ð14Þ solution and select the better candidate solution. In general,
Z if FðZÞ\FðXðtÞÞ
opposite numbers are more likely to be closer to the opti-
Y ¼ Xrabbit ðtÞ  EjJXrabbit ðtÞ  Xm ðtÞj ð15Þ mal solution than random ones. In recent years, OBL has
Z ¼ Y þ S  LFðDÞ ð16Þ been successfully applied in GA (da Silveira et al. 2016),
DE (Rahnamayan et al. 2008), SCA (Abd Elaziz et al.
where X m(t) is obtained using Eq. (4).
2017), GOA (Ewees et al. 2018a), WOA (Abd Elaziz and
The pseudocode of the HHO algorithm is shown in
Oliva 2018), ALO (Dinkar and Deep 2018), and other
Algorithm 1.
intelligent optimization algorithms, which could improve
solution precision and convergence speed of the
algorithms.
Let x be a real number in one-dimensional space. Its
opposite number xo is then defined by

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

 
xo ¼ lb þ ub  x ð17Þ lb þ ub
xqr ¼ rand ;x ð21Þ
2
where x 2 R and x 2 ½lb; ub.This definition can be gener-  
alized to d dimensions by using the following equation: where rand lbþ2ub ; x is a random number uniformly
xoi ¼ lbi þ ubi  xi ð18Þ distributed between lbþub and x.
2
To make the above definitions more clear, Fig. 1 illus-
where xi 2 R and xi 2 ½lbi ; ubi 8i 2 1; 2; . . .; d.
trates a point x, its opposite point xo , its quasi-opposite
point xqo and its quasi-reflected point xqr .
3.2 Quasi-opposition-based learning The quasi-reflected number can also be extended to d-
dimensional space as follows:
A variant of OBL called quasi-opposition-based learning  
(QOBL) has been proposed by (Rahnamayan et al. 2007). qr lbi þ ubi
xi ¼ rand ; xi ð22Þ
Previous research have already proved that using quasi- 2
opposite numbers was more effective than opposite num-
bers in finding the global optimal solution (Guha et al.
3.4 The proposed quasi-reflected Harris hawks
2016; Sharma et al. 2016; Shiva and Mukherjee 2015;
optimization (QRHHO) algorithm
Truong et al. 2019). On the basis of the opposite number,
the quasi-opposite number xqo of x is define by
  Similar to other swarm intelligence algorithms, HHO may
qo lb þ ub o be trapped in local optimum and slow convergence.
x ¼ rand ;x ð19Þ
2 Therefore, the QRHHO algorithm is proposed to enhance
the performance of HHO, which is a combination of QRBL
where lbþ2ub represents the center of the interval ½lb; ub
  and HHO. The proposed method consists of two stages:
and rand lbþ2ub ; xo is a random number uniformly dis- Firstly, the QRBL mechanism is applied to population
tributed between lbþ2ub and xo . initialization to improve the quality and diversity of the
initial population. Secondly, the QRBL strategy is added to
Similarly, the quasi-opposite number can also be
each population location update to improve the conver-
extended to d-dimensional space by using the following
gence rate.
equation:
  In the initial phase, the QRHHO algorithm generates a
lbi þ ubi o  
xqo ¼ rand ; x ð20Þ random population P0 ¼ Xij ; i ¼ 1; 2; . . .; N; j ¼
i i
2 1; 2; . . .; D. (N is the population size. D is the dimension of
the problem.) Then, QRBL is used to calculate the quasi-
3.3 Quasi-reflection-based learning reflective solution of each solution in the population, so as
n o
to obtain a quasi-reflective population Pqr qr
0 ¼ Xij ;
Based on OBL and QOBL, a new quasi-reflection-based i ¼ 1; 2; . . .; N; j ¼ 1; 2; . . .; D. The fitness values of two
learning mechanism (QRBL) was proposed in the literature populations are calculated and compared, among which the
(Ewees et al. 2018b). The quasi-reflected number xqr is best N individuals in the two populations will be selected as
obtained by reflecting the quasi-opposite number xqo , the initial population. The pseudocode is shown as follows:
which is defined by

Fig. 1 Opposite points defined in domain [lb, ub]. xo is the opposite point of x, xqo and xqr represent the quasi-opposite and quasi-reflected points,
respectively

123
Q. Fan et al.

QRBL initializes the population Step 1 Initialize parameters including population size N,
Generate random population of hawks P0 = X ij { } , i = 1, 2,..., N ; j = 1, 2,..., D maximum iteration number T, search dimension D, upper
bound UB and lower bound LB of search space;
Generate quasi-reflected population P0 = X ij
qr
{ qr } , i = 1, 2,..., N ; j = 1, 2,..., D Step 2 Generate randomly a population P0 in search
for i = 1: N space and use the QRBL method to obtain its quasi-re-
for j = 1: D flected population Pqr 0 . Then, select the optimal N individ-
Mj = Mj ( +Mj)/2 uals from the two populations as the initial population;
r is a random number in [0,1]
Step 3 Calculate the fitness value of each individual in
the current population and find the location with the best
if X i, j < M j
fitness value as the optimal location Xrabbit;
qr
X i, j = X i, j + ( M j − X i, j ) × r Step 4 Update the prey energy E. According to the value
of E, select one search strategy from Eqs. (2), (5), (8), (13)
else
and (14) to update the individual location of the current
qr
X i, j = M j + ( X i, j − M j ) × r
population;
end Step 5 Generate the quasi-reflected population Pqr of the
end current population P by using the QRBL strategy again and
end
Calculate and compare the fitness values of the two populations
then choose the best N individuals from the combination of
P and Pqr as the initial population for the next iteration;
Select the best N individuals from P0
qr
P0 as initial population
Step 6 Output the optimal individual if the number of
contemporary iterations reaches T. Otherwise, return to
step 3.
During the updating phase, hawks positions are updated
by using the standard HHO and a new population P is 3.5 Computational complexity of QRHHO
obtained. Then, the QRBL method is used again to gen-
erate the quasi-reflected population Pqr . According to the The computational complexity of HHO mainly consists of
fitness values of P and Pqr , QRHHO selects the best N in- three parts: initialization, fitness evaluation and population
dividuals from the two populations as the next initial updating, which can be calculated as follows:
population. Repeat the previous steps until the maximum
number of iterations is reached. The pseudocode is shown OðHHOÞ ¼ OðNÞþOðT  NÞþOðT  N  DÞ ð23Þ
as below: The complexity of HHO will increase with the appli-
cation of QRBL method. In each iteration, QRBL performs
QRBL updates the current population operations on N individuals to produce N quasi-reflection
individuals and the algorithm calculates the fitness value of
Using HHO to generate population P = X ij { } , i = 1, 2,..., N ; j = 1, 2,..., D
the 2 N individuals. After ranking 2 N fitness values, the
Generate quasi-reflected population P
qr
{ qr }
= X ij , i = 1, 2,..., N ; j = 1, 2,..., D best N individuals will be selected to continue updating.
for i = 1: N The complexity of the process is OðN  Dþ2Nþ2Nlg2NÞ.
for j = 1: D Since the QRBL method is added during the initial phases
M j = (M j +Mj)/2
and position updating phase, the whole computational
r is a random number in [0,1]
complexity of QRHHO is
if X <M
i, j j
OðQRHHOÞ ¼ OðN þðN Dþ2Nþ2Nlg2NÞÞþOðT NÞ
X
qr
i, j
(
= X + M j − X i, j × r
i, j ) þ OðT ðN DþðN Dþ2Nþ2Nlg2NÞÞÞ
else ¼ OðN ð3þDþ2lg2NþT ð3þ2Dþ2lg2NÞÞÞ
X
qr
i, j
( )
= M j + X i, j − M j × r
ð24Þ
end
end
According to the above analysis, it is obvious that the
end computational complexity of QRHHO is higher than that of
Calculate and compare the fitness values of the two populations the basic HHO.
qr
Select the best N individuals from P P as next initial population

QRHHO follows the basic steps of HHO. Its algorithm


flow is summarized as follows:

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

4 Experiments and analysis 4.2 Comparison with basic HHO and two
variants of HHO
4.1 Benchmark test functions and parameter
settings To evaluate the advantage of the QRBL mechanism on the
benchmark test problems, we compare QRHHO with the
In this section, twenty-three benchmark functions (Xin basic HHO and two variants of HHO. The two variants are
et al. 1999) are selected to verify the effectiveness of the HHO with opposition-based learning (OHHO) and HHO
proposed QRHHO. These benchmark test functions are with quasi-opposition-based learning (QOHHO).
divided into three groups according to their characteristics: The computational results of 30-dimensional unimodal
unimodal, multimodal and fixed-dimension multimodal and multimodal functions are recorded in Tables 4 and 5,
functions. The unimodal functions (F1*F7) have only one respectively, while those for the 100-dimensional unimodal
optimal solution and are often used to test the exploitation and multimodal functions are shown in Tables 6 and 7.
capability of the algorithm. The second kind of test func- Finally, the results of the fixed-dimension multimodal
tions (F8*F13) is multimodal function, which is charac- functions are reported in Table 8. All the obtained results
terized by multiple optimal values. Since it is not easy to are presented in terms of the average value (AVE), the
find the global optimal solution for these functions, it can standard deviation (STD) and the best value (BEST),
be used to test the ability of the exploration and jumping among which better AVE shows that the performance of
out of local optimal. The last type is the fixed-dimension the algorithm is superior to the others, while smaller STD
multimodal function (F14*F23), which also has a large indicates that the algorithm is more stable. BEST repre-
number of optimal solutions. However, due to the low sents the best result found by this algorithm in the 30
dimension, it is easy to find the optimization, and thus, it simulation experiments.
can be used to test the stability of the algorithm. The
definition and description of each function are given in 4.2.1 Analysis of statistical results for unimodal benchmark
Tables 1, 2 and 3, where D represents the dimension of the functions
function, range is the boundary of the function’s search
space and fmin indicates the optimum. Typical 3D shapes of The unimodal function can be used to analyze the
some selected benchmark test functions are shown in exploitation ability of the optimization algorithm, since
Fig. 2. there is only one global optimal solution and no other local
Before starting the simulation experiment, the maximum optimal solution exists. When the dimension D is set to 30,
number of iterations T and the size of population N are set it can be clearly seen from Table 4 that the QRHHO
to 500 and 30, respectively. Then, each optimization algorithm is superior to the other algorithms in most test
algorithm runs 30 times on each benchmark function functions, especially for functions F1 and F3, of which the
independently. theoretical optimal values are accurately obtained using
QRHHO, while the other three algorithms cannot find the

Table 1 Description of
Function D Range fmin
unimodal benchmark functions
P
n 30/100 [- 100, 100]D 0
F1 ð xÞ ¼ x2i
i¼1
Pn Q
n 30/100 [- 10, 10]D 0
F2 ð xÞ ¼ jxi j  jxi j
i¼1 i¼1
!2
P
n P
i 30/100 [- 100, 100]D 0
F3 ð xÞ ¼ xj
i¼1 j¼1

F4 ð xÞ ¼ maxi fjxi j; 1  i  ng 30/100 [- 100, 100]D 0


P
n1
2 30/100 [- 30, 30]D 0
F5 ð xÞ ¼ ½100 xiþ1  x2i þðxi  1Þ2 ]
i¼1
P
n 30/100 [- 100, 100]D 0
F6 ð xÞ ¼ ðjxi þ 0:5jÞ2
i¼1
Pn 30/100 [- 1.28, 1.28]D 0
F7 ð xÞ ¼ ix4i þ random½0; 1
i¼1

123
Q. Fan et al.

Table 2 Description of multimodal benchmark functions


Function D Range fmin

P
n pffiffiffiffiffiffi 30/100 [- 500, 500]D - 418.9829 9 D
F8 ð xÞ ¼ xi sin j xi j
i¼1
Pn 30/100 [- 5.12, 5.12]n 0
F9 ð xÞ ¼ x2i  10 cosð2pxi Þ þ 10
i¼1
rffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi!
1 Xn 2 30/100 [- 32, 32]D 0
F10 ð xÞ ¼ 20 exp 0:2 i¼1 i
x
n
 X 
1 n
 exp i¼1
cos ð 2px i Þ þ 20 þ e
n
P
n Q
n 30/100 [- 600, 600]D 0
1
F11 ð xÞ ¼ 4000 x2i  cosðpxiffiiÞ þ 1
i¼1 i¼1
pn Xn1 o
30/100 [- 50, 50]D 0
F12 ð xÞ ¼ 10sin2 ðpyi Þ þ i¼1
ðyi  1Þ2 1 þ 10 sin2 ðpyiþ1 Þ þ ðyn  1Þ2
n X
n
þ i¼1
uðxi ; 10; 100; 4Þ
8 m
< kðxi  aÞ ; xi [ a
xi þ1
yi ¼ 1 þ 4 ; uðxi ; a; k; mÞ ¼ 0; a\xi \a
:
kðxi  aÞm ; xi \  a

F13 ð xÞ ¼ 0:1 sin2 ð3pxi Þ 30/100 [- 50, 50]D 0
Xn1
þ i¼1
ðxi  1Þ2 1 þ sin2 ð3pxiþ1 Þ

þ ðxn  1Þ 1 þ sin2 ð2pxn Þ
Xn
þ i¼1
uðxi ; 5; 100; 4Þ

Table 3 Description of fixed-


Function D Range fmin
dimension multimodal
benchmark functions !1 D
P
25 2 [- 65, 65] 1
1 Pn 1
F14 ð xÞ ¼ 500 þ 6
j¼1 jþ i¼1
ðxi aij Þ
 
P
11 x1 ðb2i þbi x2 Þ 4 [- 5, 5]D 0.00030
F15 ð xÞ ¼ ai  b2i þbi x3 þx4
i¼1

F16 ð xÞ ¼ 4x21  2:1x41 þ 13 x61 þ x1 x2  4x22 þ 4x42 2 [- 5, 5]D - 1.0316


2 D
5:1 2
F17 ð xÞ ¼ x2  4p 5
2 x1 þ p x1  6
1
þ10 1  8p cos x1 þ 10 2 [- 5, 5] 0.398
h D
2 2
F18 ð xÞ ¼ 1 þ ðx1 þ x2 þ 1Þ 19  14x1 þ 3x1  14x2 2 [- 2, 2] 3
h
þ 6x1 x2 þ 3x22  30 þ ð2x1  3x2 Þ2 ð18
 32x1 þ 12x21 þ 48x2  36x1 x2 þ 27x22
!
P4 P3
2
3 [1, 3]D - 3.86
F19 ð xÞ ¼  ci exp  aij xj  pij
i¼1 j¼1
!
P
4 P
6
2
6 [0, 1]D - 3.32
F20 ð xÞ ¼  ci exp  aij xj  pij
i¼1 j¼1

P
5 1 4 [0, 10]D - 10.1532
F21 ð xÞ ¼  ðX  ai ÞðX  ai ÞT þci
i¼1
P
7 1 4 [0, 10]D - 10.4028
F22 ð xÞ ¼  ðX  ai ÞðX  ai ÞT þci
i¼1
P
10 1 4 [0, 10]D - 10.5363
F23 ð xÞ ¼  ðX  ai ÞðX  ai ÞT þci
i¼1

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

Fig. 2 Typical 3D representations of some benchmark functions

Table 4 Results of
Function Index HHO OHHO QOHHO QRHHO
30-dimensional unimodal
benchmark functions (F1–F7) F1 AVE 4.59E-96 3.12E-320 0 0
STD 2.37E-95 0 0 0
BEST 2.06E-112 0 0 0
F2 AVE 4.22E-49 3.28E-175 1.91E-156 3.55E2268
STD 2.31E-48 0 1.04E-155 0
BEST 5.29E-57 1.23E-220 1.74E-186 2.80E2286
F3 AVE 5.04E-73 1.10E-163 5.44E-190 0
STD 2.76E-72 0 0 0
BEST 3.27E-98 0 3.84E-242 0
F4 AVE 6.34E-50 2.77E-148 1.33E-177 2.07E2244
STD 1.77E-49 1.52E-147 0 0
BEST 1.31E-56 6.56E-197 5.11E-190 5.42E2266
F5 AVE 1.15E-02 1.02E202 1.10E-01 1.16E-01
STD 2.36E-02 1.44E202 1.22E-01 1.06E-01
BEST 2.50E-05 2.45E207 4.44E-03 1.04E-02
F6 AVE 1.17E-04 9.46E205 1.51E-03 1.25E-03
STD 1.56E-04 1.07E204 6.67E-04 5.39E-04
BEST 4.29E209 1.16E-08 2.59E-04 2.64E-04
F7 AVE 1.91E-04 1.55E-04 1.00E-04 8.04E205
STD 2.22E-04 2.05E-04 8.75E-05 5.36E205
BEST 3.10E206 1.05E-05 6.53E-06 4.19E-06
The best values obtained are in bold

123
Q. Fan et al.

Table 5 Results of
Function Index HHO OHHO QOHHO QRHHO
30-dimensional multimodal
benchmark functions (F8–F13) F8 AVE - 1.26E104 - 1.26E104 - 1.21E104 - 1.26E104
STD 7.22E201 7.60E-01 1.16E?03 6.17E?02
BEST - 1.26E104 - 1.26E104 - 1.26E104 - 1.26E104
F9 AVE 0 0 0 0
STD 0 0 0 0
BEST 0 0 0 0
F10 AVE 8.88E-16 8.88E-16 8.88E-16 8.88E216
STD 0 0 0 0
BEST 8.88E-16 8.88E-16 8.88E-16 8.88E216
F11 AVE 0 0 0 0
STD 0 0 0 0
BEST 0 0 0 0
F12 AVE 1.07E-05 5.62E206 1.39E-04 9.37E-05
STD 1.41E-05 7.45E206 5.99E-05 4.81E-05
BEST 6.16E209 1.99E-07 6.06E-05 2.61E-05
F13 AVE 1.30E-04 6.53E205 2.91E-03 1.42E-03
STD 2.44E-04 7.06E205 5.95E-03 5.11E-04
BEST 1.37E207 2.02E-07 6.17E-05 6.28E-04
The best values obtained are in bold

Table 6 Results of
Function Index HHO OHHO QOHHO QRHHO
100-dimensional unimodal
benchmark functions (F1–F7) F1 AVE 4.14E-94 1.73E-298 2.07E-307 0
STD 2.22E-93 0 0 0
BEST 4.63E-110 0 0 0
F2 AVE 1.17E-49 1.90E-165 6.65E-123 5.03E2253
STD 5.21E-49 0.00E?00 3.46E-122 0
BEST 1.82E-57 5.80E-203 2.96E-155 2.00E2272
F3 AVE 1.91E-62 6.90E-115 6.14E-141 0
STD 1.04E-61 3.25E-114 3.37E-140 0
BEST 7.41E-87 4.18E-273 2.11E-185 0
F4 AVE 1.31E-46 2.85E-148 7.01E-148 2.04E2233
STD 7.04E-46 1.56E-147 3.18E-147 0
BEST 3.27E-57 2.01E-184 2.78E-165 3.98E2252
F5 AVE 4.11E202 7.78E-02 9.00E-01 8.05E-01
STD 5.77E202 1.81E-01 1.01E?00 5.72E-01
BEST 9.61E207 4.65E-05 3.10E-02 1.02E-01
F6 AVE 5.48E-04 4.57E204 1.16E-02 1.57E-02
STD 6.61E204 7.45E-04 5.29E-03 6.44E-03
BEST 1.79E-07 1.30E209 3.28E-03 2.86E-03
F7 AVE 1.37E-04 1.64E-04 9.67E-05 8.86E205
STD 1.36E-04 1.46E-04 1.02E-04 8.12E205
BEST 3.46E-06 1.12E-05 3.15E206 7.16E-06
The best values obtained are in bold

optimal solution. For functions F2, F4 and F7, the average algorithms. Compared with HHO, OHHO also achieves a
values of QRHHO decrease obviously, while for functions certain degree of improvement by applying the common
F5 and F6, OHHO obtains better solutions than the other OBL mechanism.

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

Table 7 Results of
Function Index HHO OHHO QOHHO QRHHO
100-dimensional multimodal
benchmark functions (F8–F13) F8 AVE - 4.19E104 - 4.19E?04 - 4.07E?04 - 4.15E?04
STD 4.39E101 4.48E?01 2.11E?03 1.47E?02
BEST - 4.19E?04 - 4.19E?04 - 4.19E?04 2 4.19E104
F9 AVE 0 0 0 0
STD 0 0 0 0
BEST 0 0 0 0
F10 AVE 8.88E-16 8.88E-16 8.88E-16 8.88E216
STD 0 0 0 0
BEST 8.88E-16 8.88E-16 8.88E-16 8.88E216
F11 AVE 0 0 0 0
STD 0 0 0 0
BEST 0 0 0 0
F12 AVE 4.44E-06 3.19E206 1.58E-04 1.25E-04
STD 5.07E-06 4.18E206 5.32E-05 6.00E-05
BEST 3.02E-09 1.31E208 5.65E-05 3.30E-05
F13 AVE 1.26E-04 9.51E205 4.07E-03 5.69E-03
STD 2.26E-04 8.87E205 2.61E-03 2.13E-03
BEST 2.15E-07 6.45E210 6.46E-04 1.31E-04
The best values obtained are in bold

When the search space dimension is increased to 100, 4.2.3 Analysis of statistical results for fixed-dimension
we can notice from Table 6 that QRHHO still achieves multimodal functions
better solutions than the other algorithms for functions F1–
F4 and F7. The HHO outperforms the other algorithms in Functions F14–F23 are multimodal functions with fixed
function F5, whereas, for function F6, HHO obtains the small dimensions, which can be used to test the stability
best STD, and OHHO gets the best AVE and BEST. and exploration ability of the algorithm. As can be seen
Hence, QRHHO still maintains the best search perfor- from Table 8, for all fixed-dimension multimodal func-
mance for most high-dimensional unimodal functions. This tions, the performance of QRHHO surpasses the basic
fully proves the superiority of the exploitation ability of HHO in an all-round way, and the solution results are very
QRHHO. close to the theoretical value on most functions. In par-
ticular, HHO cannot find theoretical values for functions
4.2.2 Analysis of statistical results for multimodal F21–F23, while QRHHO can find them easily. Compared
benchmark functions with OHHO and QOHHO, AVEs of QRHHO are in the
lead overall, and the STDs of QRHHO keep the minimum
Compared with the unimodal test function, the multimodal except for functions F18, F20 and F22. Therefore, it can be
test function has many optimal solutions, among which one concluded that QRHHO always remains the high stability
is global and the rest are local. These multimodal test and exploration ability on this type of benchmark
problems are generally used to evaluate the exploration functions.
ability of search algorithm. As shown in Table 5 and
Table 7, for function F8, the results of four algorithms are 4.2.4 Convergence analysis
very close. For functions F9–F11, each algorithm could
obtain the theoretical optimal solution. With respect to the The convergence curves of some functions are selected and
100-dimensional function F12 and F13, OHHO achieves shown in Figs. 3, 4 and 5. We notice that QRHHO algo-
better solutions than the other algorithms. rithm starts to converge from the previous generations in
For multimodal benchmark functions, however, the all unimodal functions (Figs. 3, 4). In particular, only
performance improvement in QRHHO is not very obvious. QRHHO could obtain the global optimal of functions F1
This is mainly due to the fact that the basic HHO has and F3. For the multimodal function F9–F11, the conver-
already provided very competitive results for these gence speed of QRHHO from the initial generation is much
functions. faster than that of HHO, and also slightly higher than that

123
Q. Fan et al.

Table 8 Results of the fixed-


Function Index HHO OHHO QOHHO QRHHO
dimension multimodal
benchmark functions (F14–F23) F14 AVE 1.3943 1.4948 1.1305 1.0643
STD 0.9563 0.6253 0.3437 0.2522
BEST 0.9980 0.9980 0.9980 0.9980
F15 AVE 3.95E-04 4.08E-04 3.13E-04 3.12E204
STD 2.62E-04 3.05E-04 1.02E-05 4.79E206
BEST 3.12E-04 3.08E-04 3.07E204 3.08E-04
F16 AVE - 1.0316 - 1.0316 - 1.0316 2 1.0316
STD 2.98E-10 1.82E-08 7.58E-10 6.53E213
BEST - 1.0316 - 1.0316 - 1.0316 2 1.0316
F17 AVE 0.398 0.398 0.398 0.398
STD 3.25E-05 2.14E-06 1.88E-06 1.29E206
BEST 0.398 0.398 0.398 0.398
F18 AVE 3 3 3 3
STD 1.95E-06 1.95E-07 2.88E212 6.09E-07
BEST 3 3 3 3
F19 AVE - 3.86 - 3.86 - 3.86 2 3.86
STD 2.72E-03 9.20E-04 1.71E-04 3.93E208
BEST - 3.86 - 3.86 - 3.86 2 3.86
F20 AVE - 3.11 - 3.13 2 3.29 - 3.27
STD 0.1012 0.1190 6.56E202 7.71E-02
BEST - 3.27 - 3.31 - 3.32 2 3.32
F21 AVE - 5.2187 - 10.1525 - 9.9830 2 10.1532
STD 0.9263 8.92E-04 0.9307 7.18E206
BEST - 10.1230 - 10.1532 - 10.1532 2 10.1532
F22 AVE - 5.2568 2 10.4023 - 10.0483 - 10.2258
STD 0.9469 1.03E203 1.3485 0.9704
BEST - 10.2703 - 10.4029 - 10.4029 2 10.4029
F23 AVE - 5.2970 - 10.5358 - 10.3561 2 10.5364
STD 1.4993 6.20E-04 0.9873 2.60E206
BEST - 10.3239 - 10.5364 - 10.5364 2 10.5364
The best values obtained are in bold

of QOHHO and OHHO. For several fixed-dimension cosine algorithm (SCA) (Mirjalili 2016), salp swarm
multimodal functions, QRHHO accelerates the conver- algorithm (SSA) (Mirjalili et al. 2017) and multi-verse
gence of these test functions compared with HHO (Fig. 5). optimizer (MVO) (Mirjalili et al. 2015). For a fair com-
Hence, we conclude that in general the proposed QRHHO parison, the maximum number of iterations of the seven
algorithm shows its better solution accuracy and conver- algorithms is set to 500, and the size of population is set to
gence speed than the other three algorithms in most 30. The other parameters are listed in Table 9.
benchmark functions. The results obtained are shown in Tables 10, 11 and 12.
From Table 10, for the 30-dimensional unimodal and
4.3 Comparison with other swarm-based multimodal functions, QRHHO performs better than other
intelligent algorithms algorithms on most functions because of its minimum AVE
and STD. As can be seen from Table 11, QRHHO out-
To further verify the superiority of the QRHHO algorithm, performs other algorithms in all 100-dimensional unimodal
we compare it with the other six well-known optimization and multimodal functions. From Table 12, for the fixed-
algorithms. These six algorithms are: whale optimization dimension functions F14–F15 and F21–F23, QRHHO
algorithm (WOA) (Mirjalili and Lewis 2016), particle achieves BEST for each function, but other algorithms
swarm optimization (PSO) (Kennedy and Eberhart 2002), cannot get it. For the other functions F16–F20, these
grey wolf optimizer (GWO) (Mirjalili et al. 2014), sine

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

Fig. 3 Comparison of the convergence curves for QRHHO and the other three algorithms obtained in some of the 30-dimensional benchmark
functions

Fig. 4 Comparison of the convergence curves for QRHHO and the other three algorithms obtained in some of the 100-dimensional benchmark
functions

algorithms can obtain the optimal solution because the These results demonstrate the superiority and stability of
function problems relatively are not too difficult. the QRHHO algorithm for solving global optimization

123
Q. Fan et al.

Fig. 5 Comparison of the convergence curves for QRHHO and the other three algorithms obtained in some of the fixed-dimension benchmark
functions

solution and selects the better one from the two solutions.
Table 9 Parameter settings
With successive iterative calculation, QOBL expands the
Algorithm Parameter Value search space continuously, which makes QRHHO con-
PSO Maximum inertia weight (wmax ) 0.09
verging to the satisfied solution faster. As shown in the
Minimum inertia weight (wmin ) 0.04
second competitive experiment, QRHHO also takes the
lead in comparison with the other six well-known algo-
Maximum velocity (vmax ) 1
rithms. The optimization results further demonstrate the
Minimum velocity (vmax ) -1
superiority of the QRBL mechanism.
Cognitive coefficient(c1 ) 2
Nevertheless, it should be pointed out that QRHHO
Cognitive coefficient(c2 ) 2
cannot be regarded as a universal optimization algorithm
GWO Convergence constant (a) [0, 2]
according to the no free lunch theorem. For example, on
MVO Maximum of wormhole existence probability 1
unimodal functions F5 and F6, together with multimodal
Minimum of wormhole existence probability 0.2
functions F12 and F13, the convergence accuracy of
SCA Convergence constant (r1 ) [0, 2]
QRHHO is a little worse than that of the basic HHO.
SSA Coefficient (c1 ) [2/e, 2]
Consequently, QRHHO also has some limitations and
WOA Convergence constant (a) [0, 2]
needs further testing and application.
Coefficient (b) 1

5 Conclusion
problems, and QRHHO is also suitable for solving high- This paper presents a novel quasi-reflected Harris hawks
dimensional optimization problems. optimization (QRHHO) algorithm, which combines the
standard Harris hawks optimization algorithm and the
4.4 Discussion on the performance of QRHHO quasi-reflection-based learning (QRBL) mechanism. The
proposed method mainly applies the QRBL mechanism to
From the first competitive experiment, we observe that the the initial phase, and each population updates phase in the
optimization results of QRHHO are not only better than the optimization process. In order to verify the performance of
basic HHO, but also better than OHHO and QOHHO on QRHHO, twenty-three benchmark functions of various
most benchmark functions. The three variants of HHO are types and dimensions have been evaluated to analyze the
improved by three corresponding OBL mechanisms. exploration capability, the exploitation capability and the
Obviously, the difference of the three OBL mechanisms convergence behavior of the proposed algorithm. The
leads to the performance difference of the three variants. experimental results show that the performance of the
As for the comparison of the three mechanisms, Ewees QRHHO algorithm is better than the basic HHO, two
et al. (2018b) have proved that QRBL was more likely to variants of HHO algorithm and other six swarm-based
obtain accurate solutions than OBL and QOBL. In the intelligent algorithms. The QRBL mechanism helps HHO
optimization process, QRHHO uses the QRBL mechanism to find the optimal solution faster. In the future work, the
to calculate the quasi-reflection solution of the candidate proposed QRHHO is expected to be applied to practical

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

Table 10 Results of 30-dimensional unimodal and multimodal benchmark functions (F1–F13)


Function Index PSO GWO MVO WOA SSA SCA QRHHO

F1 AVE 1.61E-04 2.46E-27 1.22E?00 7.27E-75 1.18E-06 1.55E?01 0.00E100


STD 3.20E-04 6.44E-27 3.33E-01 2.34E-74 4.55E-06 4.17E?01 0.00E100
BEST 6.16E-06 1.52E-29 7.24E-01 1.02E-86 3.46E-08 6.98E-02 0.00E100
F2 AVE 8.09E?00 8.84E-17 6.73E?00 4.97E-50 2.42E?00 2.56E-02 3.55E2268
STD 8.00E?00 5.42E-17 1.95E?01 2.71E-49 2.30E?00 4.28E-02 0.00E100
BEST 5.56E-03 1.01E-17 4.67E-01 7.25E-57 2.01E-01 5.68E-05 2.80E2286
F3 AVE 9.27E?01 9.13E-06 2.19E?02 4.52E?04 1.48E?03 8.92E?03 0.00E100
STD 3.39E?01 1.99E-05 7.98E?01 1.41E?04 8.22E?02 5.74E?03 0.00E100
BEST 3.67E?01 3.27E-09 5.79E?01 1.84E?04 5.32E?02 1.10E?03 0.00E100
F4 AVE 1.06E?00 9.43E-07 1.89E?00 5.27E?01 1.11E?01 3.59E?01 2.07E2244
STD 2.34E-01 1.15E-06 6.35E-01 2.48E?01 3.93E?00 1.39E?01 0.00E100
BEST 5.87E-01 6.15E-08 6.33E-01 3.13E?00 5.86E?00 1.21E?01 5.42E2266
F5 AVE 1.09E?02 2.70E?01 2.37E?02 2.82E?01 2.40E?02 7.50E?04 1.16E201
STD 1.06E?02 7.68E-01 2.66E?02 5.14E-01 3.84E?02 2.35E?05 1.06E201
BEST 2.44E?01 2.61E?01 3.45E?01 2.72E?01 2.48E?01 6.64E?01 1.04E202
F6 AVE 1.44E-04 6.96E-01 1.28E?00 4.50E-01 2.76E207 6.32E?01 1.25E-03
STD 1.69E-04 3.70E-01 3.15E-01 2.36E-01 5.76E207 1.62E?02 7.86E-04
BEST 5.38E-06 9.45E-05 7.42E-01 6.45E-02 2.49E208 4.50E?00 2.64E-04
F7 AVE 3.97E?00 1.92E-03 3.53E-02 6.20E-03 1.75E-01 1.48E-01 8.04E205
STD 6.04E?00 8.32E-04 1.32E-02 6.83E-03 6.26E-02 2.37E-01 5.36E205
BEST 9.37E-02 4.67E-04 1.58E-02 1.19E-04 5.14E-02 1.20E-02 4.19E206
F8 AVE - 4.41E?03 - 6.02E?03 - 7.54E?03 - 1.01E?04 - 7.43E?03 - 3.80E?03 2 1.24E104
STD 1.12E?03 7.74E?02 6.28E?02 1.69E?03 6.58E?02 3.44E?02 6.17E102
BEST - 7.09E?03 - 7.40E?03 - 9.24E?03 - 1.26E?04 - 9.05E?03 - 4.87E?03 2 1.26E104
F9 AVE 1.05E?02 1.76E?00 1.13E?02 0.00E?00 5.38E?01 3.82E?01 0.00E100
STD 3.19E?01 2.98E?00 2.80E?01 0.00E?00 2.02E?01 3.75E?01 0.00E100
BEST 5.03E?01 5.68E-14 6.42E?01 0.00E?00 1.89E?01 2.75E-02 0.00E100
F10 AVE 2.36E-01 1.01E-13 1.78E?00 4.09E-15 2.72E?00 1.58E?01 8.88E216
STD 5.17E-01 1.65E-14 6.14E-01 2.70E-15 8.36E-01 7.78E?00 0.00E100
BEST 2.54E-03 7.55E-14 4.60E-01 8.88E-16 1.34E?00 4.96E-02 8.88E216
F11 AVE 9.96E-03 2.70E-03 8.57E-01 4.80E-03 2.25E-02 9.26E-01 0.00E100
STD 9.49E-03 7.45E-03 9.30E-02 2.63E-02 1.20E-02 3.54E-01 0.00E100
BEST 2.50E-06 0.00E?00 5.43E-01 0.00E?00 1.50E-03 1.04E-01 0.00E100
F12 AVE 1.04E-02 4.71E-02 2.56E?00 2.41E-02 7.38E?00 1.33E?04 9.37E205
STD 3.16E-02 2.34E-02 1.77E?00 2.09E-02 2.74E?00 4.57E?04 4.81E205
BEST 7.80E208 1.34E-02 5.22E-01 6.43E-03 2.80E?00 1.25E?00 2.61E-05
F13 AVE 5.55E-03 7.08E-01 2.21E-01 5.68E-01 1.56E?01 5.15E?04 1.42E203
STD 5.61E-03 2.19E-01 1.53E-01 3.23E-01 1.45E?01 1.92E?05 5.11E204
BEST 9.82E207 3.72E-01 4.46E-02 1.05E-01 5.03E-02 2.59E?00 6.28E-04
The best values obtained are in bold

123
Q. Fan et al.

Table 11 Results of 100-dimensional unimodal and multimodal benchmark functions (F1–F13)


Function Index PSO GWO MVO WOA SSA SCA QRHHO

F1 AVE 2.35E?01 1.60E-12 1.69E?02 1.43E-72 1.32E?03 1.07E?04 0.00E100


STD 7.24E?00 1.15E-12 2.64E?01 4.67E-72 3.45E?02 7.58E?03 0.00E100
BEST 8.87E?00 2.62E-13 1.10E?02 6.96E-84 6.59E?02 7.47E?02 0.00E100
F2 AVE 1.23E?02 4.68E-08 2.74E?23 2.93E-51 4.66E?01 5.71E?00 5.03E2253
STD 2.92E?01 1.96E-08 1.39E?24 1.20E-50 6.23E?00 3.70E?00 0.00E100
BEST 7.44E?01 2.10E-08 2.86E?02 9.47E-56 3.32E?01 7.24E-01 2.00E2272
F3 AVE 1.76E?04 5.90E?02 6.70E?04 1.14E?06 5.35E?04 2.34E?05 0.00E100
STD 4.24E?03 7.79E?02 8.86E?03 3.09E?05 2.07E?04 5.95E?04 0.00E100
BEST 9.17E?03 1.19E?01 4.65E?04 7.47E?05 2.25E?04 1.42E?05 0.00E100
F4 AVE 1.22E?01 6.89E-01 5.68E?01 7.38E?01 2.86E?01 8.93E?01 2.04E2233
STD 1.46E?00 5.90E-01 6.73E?00 2.38E?01 3.58E?00 2.49E?00 0.00E100
BEST 9.27E?00 1.08E-01 4.67E?01 9.28E?00 2.06E?01 8.21E?01 3.98E2252
F5 AVE 1.83E?04 9.78E?01 1.45E?04 9.81E?01 1.75E?05 1.10E?08 8.05E201
STD 8.05E?03 6.85E-01 1.25E?04 2.40E-01 9.54E?04 4.79E?07 5.72E201
BEST 6.87E?03 9.61E?01 2.95E?03 9.74E?01 4.86E?04 2.39E?07 1.02E201
F6 AVE 1.95E?01 1.02E?01 1.65E?02 4.68E?00 1.44E?03 1.14E?04 1.57E202
STD 4.55E?00 9.06E-01 2.77E?01 1.06E?00 5.05E?02 9.22E?03 6.44E203
BEST 1.22E?01 8.55E?00 1.18E?02 2.63E?00 6.64E?02 1.30E?03 2.86E203
F7 AVE 3.12E?02 7.87E-03 6.78E-01 3.85E-03 2.65E?00 1.23E?02 8.86E205
STD 1.18E?02 2.39E-03 1.54E-01 3.77E-03 5.48E-01 6.85E?01 8.12E205
BEST 1.23E?02 3.40E-03 4.45E-01 1.92E-05 1.59E?00 2.42E?01 7.16E206
F8 AVE - 1.11E?04 - 1.61E?04 - 2.30E?04 - 3.46E?04 - 2.15E?04 - 7.05E?03 2 4.17E104
STD 3.46E?03 2.82E?03 1.51E?03 6.05E?03 1.58E?03 5.02E?02 1.47E102
BEST - 1.73E?04 - 1.92E?04 - 2.63E?04 - 4.19E?04 - 2.45E?04 - 8.46E?03 2 4.19E104
F9 AVE 7.56E?02 9.94E?00 7.07E?02 3.79E-15 2.40E?02 2.45E?02 0.00E100
STD 7.89E?01 7.75E?00 8.61E?01 2.08E-14 4.04E?01 1.10E?02 0.00E100
BEST 5.77E?02 1.69E-11 5.58E?02 0.00E?00 1.43E?02 7.40E?01 0.00E100
F10 AVE 3.58E?00 1.11E-07 8.61E?00 4.80E-15 9.88E?00 1.92E?01 8.88E216
STD 3.03E-01 4.09E-08 6.49E?00 2.85E-15 1.20E?00 3.83E?00 0.00E100
BEST 2.83E?00 5.14E-08 4.26E?00 8.88E-16 7.59E?00 4.90E?00 8.88E216
F11 AVE 4.18E-01 3.68E-03 2.49E?00 2.34E-02 1.50E?01 8.08E?01 0.00E100
STD 7.60E-02 9.65E-03 2.86E-01 8.94E-02 4.96E?00 5.94E?01 0.00E100
BEST 2.75E-01 1.20E-13 2.11E?00 0.00E?00 7.56E?00 2.72E?00 0.00E100
F12 AVE 4.62E?00 3.15E-01 2.00E?01 4.25E-02 3.14E?01 3.10E?08 1.35E204
STD 1.44E?00 7.65E-02 7.78E?00 1.59E-02 1.05E?01 1.79E?08 6.00E205
BEST 2.66E?00 2.04E-01 9.87E?00 1.49E-02 1.35E?01 4.11E?07 4.69E205
F13 AVE 5.82E?01 6.82E?00 1.78E?02 2.81E?00 9.55E?03 4.94E?08 5.69E203
STD 1.66E?01 3.15E-01 2.80E?01 9.85E-01 1.44E?04 2.34E?08 2.13E203
BEST 2.90E?01 6.10E?00 1.09E?02 1.23E?00 2.05E?02 5.53E?07 1.89E204
The best values obtained are in bold

123
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems

Table 12 Results of fixed-dimension benchmark functions (F14–F23)


Function Index PSO GWO MVO WOA SSA SCA QRHHO

F14 AVE 3.10E?00 4.69E?00 9.98E201 2.54E?00 1.30E?00 1.66E?00 1.06E?00


STD 2.40E?00 4.27E?00 2.61E211 2.51E?00 5.31E-01 9.51E-01 2.52E-01
BEST 9.98E-01 9.98E-01 9.98E201 9.98E-01 9.98E-01 9.98E-01 9.98E-01
F15 AVE 5.87E-03 8.38E-03 6.15E-03 7.64E-04 3.52E-03 1.02E-03 3.12E204
STD 8.25E-03 9.95E-03 8.72E-03 5.35E-04 6.73E-03 3.90E-04 4.79E206
BEST 6.14E-04 3.08E-04 5.66E-04 3.08E-04 5.28E-04 3.98E-04 3.08E204
F16 AVE - 1.03E?00 - 1.03E?00 - 1.03E?00 - 1.03E?00 - 1.03E?00 - 1.03E?00 2 1.03E100
STD 6.52E-16 2.52E-08 5.48E-07 1.11E-09 1.96E-14 5.59E-05 6.53E213
BEST - 1.03E?00 - 1.03E?00 - 1.03E?00 - 1.03E?00 - 1.03E?00 - 1.03E?00 2 1.03E100
F17 AVE 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 4.00E-01 3.98E201
STD 0.00E?00 3.45E-06 8.49E-07 6.34E-06 9.45E-14 1.94E-03 1.29E206
BEST 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E201
F18 AVE 3.00E?00 3.00E?00 3.00E?00 3.00E?00 3.00E?00 3.00E?00 3.00E100
STD 1.55E215 4.02E-05 3.22E-06 1.03E-04 8.89E-14 1.07E-04 6.09E-07
BEST 3.00E?00 3.00E?00 3.00E?00 3.00E?00 3.00E?00 3.00E?00 3.00E100
F19 AVE - 3.86E?00 - 3.86E?00 - 3.86E?00 - 3.85E?00 - 3.86E?00 - 3.86E?00 2 3.86E100
STD 2.68E215 1.79E-03 4.17E-06 9.11E-03 3.25E-11 2.41E-03 3.93E-08
BEST - 3.86E?00 - 3.86E?00 - 3.86E?00 - 3.86E?00 - 3.86E?00 - 3.86E?00 2 3.86E100
F20 AVE - 3.17E?00 - 3.26E?00 - 3.27E?00 - 3.20E?00 - 3.22E?00 - 2.89E?00 2 3.27E100
STD 3.33E-01 7.19E-02 6.14E-02 1.17E-01 5.47E202 4.13E-01 7.71E-02
BEST - 3.32E?00 - 3.32E?00 - 3.32E?00 - 3.32E?00 - 3.32E?00 - 3.15E?00 2 3.32E100
F21 AVE - 6.86E?00 - 8.80E?00 - 7.47E?00 - 8.27E?00 - 6.81E?00 - 2.76E?00 2 1.02E101
STD 3.05E?00 2.28E?00 3.23E?00 2.73E?00 3.50E?00 2.07E?00 7.18E206
BEST - 1.02E?01 - 1.02E?01 - 1.02E?01 - 1.02E?01 - 1.02E?01 - 6.98E?00 2 1.02E101
F22 AVE - 9.36E?00 - 1.01E?01 - 8.25E?00 - 7.72E?00 - 9.54E?00 - 3.95E?00 2 1.02E101
STD 2.41E?00 1.42E?00 3.18E?00 3.41E?00 2.28E?00 2.29E?00 9.70E201
BEST - 1.04E?01 - 1.04E?01 - 1.04E?01 - 1.04E?01 - 1.04E?01 - 9.37E?00 2 1.04E101
F23 AVE - 9.38E?00 - 1.01E?01 - 9.65E?00 - 7.11E?00 - 7.70E?00 - 3.67E?00 2 1.05E101
STD 2.37E?00 1.75E?00 2.35E?00 3.37E?00 3.60E?00 1.63E?00 2.60E206
BEST - 1.05E?01 - 1.05E?01 - 1.05E?01 - 1.05E?01 - 1.05E?01 - 5.57E?00 2 1.05E101
The best values obtained are in bold

engineering optimization problems, such as parameter References


optimization, image processing, data mining and feature
selection. Abbasi A, Firouzi B, Sendur P (2019) On the application of Harris
hawks optimization (HHO) algorithm to the design of
Acknowledgements This paper is supported by National Natural microchannel heat sinks. Eng Comput. https://doi.org/10.1007/
Science Foundation of China (No. 41404008), Open Foundation of s00366-019-00892-0
Key Laboratory for Digital Land and Resources of Jiangxi Province Abd Elaziz M, Oliva D (2018) Parameter estimation of solar cells
(No. DLLJ201911) and Guiding Project of Fujian Science and diode models by an improved opposition-based whale optimiza-
Technology Program (No. 2018Y0021). tion algorithm. Energy Convers Manag 171:1843–1859. https://
doi.org/10.1016/j.enconman.2018.05.062
Abd Elaziz M, Oliva D, Xiong S (2017) An improved opposition-
Compliance with ethical standards based sine cosine algorithm for global optimization. Expert Syst
Appl 90:484–500. https://doi.org/10.1016/j.eswa.2017.07.043
Conflict of interest The authors declared that they have no conflict of da Silveira LA, Soncco-Álvarez JL, de Lima TA, Ayala-Rincón M
interest to this work. (2016) Memetic and opposition-based learning genetic algo-
rithms for sorting unsigned genomes by translocations. In:
Advances in nature and biologically inspired computing.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.