https://doi.org/10.1007/s00500-020-04834-7
Abstract
Harris hawks optimization (HHO) is a recently developed meta-heuristic optimization algorithm based on the hunting behavior of Harris hawks. Like other meta-heuristic algorithms, HHO tends to suffer from low population diversity, entrapment in local optima and unbalanced exploitation ability. To improve the performance of HHO, a novel quasi-reflected Harris hawks algorithm (QRHHO) is proposed, which combines the HHO algorithm with a quasi-reflection-based learning (QRBL) mechanism. The improvement comprises two parts: the QRBL mechanism is first introduced to increase population diversity in the initial stage, and QRBL is then applied at each population position update to improve the convergence rate. The proposed method also helps to control the balance between exploration and exploitation. The performance of QRHHO has been tested on twenty-three benchmark functions of various types and dimensions. Comparisons with the basic HHO, HHO combined with the opposition-based learning mechanism and HHO combined with the quasi-opposition-based learning mechanism demonstrate that QRHHO effectively improves the convergence speed and solution accuracy of the basic HHO and the two HHO variants. At the same time, QRHHO also outperforms other swarm-based intelligent algorithms.
Q. Fan et al.
benchmark functions. Currently, it has been applied to several design and engineering optimization problems across disciplines. For example, Qais et al. (2020) applied HHO to extract the unknown parameters of the three-diode photovoltaic model. The results revealed that the proposed method was efficient and could easily identify the electrical parameters of any commercial photovoltaic panel based on the datasheet values only. Abbasi et al. (2019) confirmed that HHO had a superior performance in minimizing the entropy generation of the microchannel. Shehabeldeen et al. (2019) found that HHO could efficiently search for optimal values of the adaptive neuro-fuzzy inference system parameters and determine the optimal operating conditions of the friction stir welding process. Houssein et al. (2020b) employed HHO to determine the sink node location in a large-scale wireless sensor network, and the method ultimately prolonged the lifetime of the network in an efficacious way. Yıldız et al. applied HHO to solve the structural optimization problem of a vehicle component in industry (Yildiz and Yildiz 2019) and used HHO to select optimal machining parameters in manufacturing operations (Yildiz et al. 2019). Moreover, Houssein et al. (2020a) hybridized HHO with the support vector machine for chemical descriptor selection and chemical compound activities, from which a superior accuracy was obtained compared with well-known algorithms.

However, as a new swarm intelligence algorithm, HHO is still in its infancy. Like other swarm intelligent optimization algorithms, it is difficult for HHO to achieve a balance between exploration and exploitation due to the randomness of the optimization process. Therefore, the algorithm may suffer from low accuracy and slow convergence, as well as fall into local optima. To overcome the limitations of basic HHO, some scholars have made improvements in different ways. For example, Jia et al. (2019) introduced a dynamic control parameter strategy and a mutation mechanism into HHO to enhance its search efficiency. Kamboj et al. (2020) proposed a hybrid HHO and SCA optimization algorithm, which was much better than standard HHO, SCA and other optimization algorithms. Yousri et al. (2020) presented a novel MHHO by modifying the exploration phase and the prey energy equation and applied the method to optimize photovoltaic array reconfiguration for alleviating the partial shading influence. Too et al. (2019) proposed two binary versions of HHO, namely BHHO and QBHHO, to tackle the feature selection problem in classification tasks, which could achieve the highest classification accuracy compared with other algorithms. Kurtulus et al. (2020) hybridized HHO and simulated annealing to accelerate its global convergence performance and first used the hybridized algorithm to optimize the design parameters for highway guardrail systems. Hussain et al. (2019) developed the LHHO algorithm by modifying HHO with a long-term memory concept. The comparison results showed that LHHO could maintain exploration up to a certain level and achieve better results than HHO. Besides, Yıldız et al. (2019) combined HHO with the Nelder–Mead local search algorithm and used the new algorithm to solve a milling manufacturing optimization problem.

Although the above-mentioned variants of HHO are better optimized for specific problems, they are not suitable for most optimization problems. Therefore, it is meaningful to develop more efficient algorithms. This paper puts forward a new quasi-reflected HHO algorithm (QRHHO) that combines HHO and the quasi-reflection-based learning (QRBL) mechanism to further improve the basic HHO. QRBL is a variant of opposition-based learning (OBL), which is an effective intelligent optimization technique. It has been utilized by meta-heuristic algorithms such as biogeography-based optimization (BBO) (Ergezer et al. 2009), the ions motion optimization algorithm (IMO) (Das et al. 2016) and the symbiotic organisms search algorithm (SOS) (Das et al. 2017). The algorithms that integrate the QRBL mechanism show better convergence speed than the original algorithms and can also avoid local optima.

To verify the effectiveness of the proposed QRHHO, twenty-three typical benchmark functions of various types and dimensions are used for simulation experiments. Its performance is then compared with the basic HHO algorithm and two other improved HHO algorithms. The experimental results show that the proposed QRHHO is a novel, powerful search algorithm with good convergence speed and global optimization ability for various optimization problems.

The rest of the paper is organized as follows. In Sect. 2, the Harris hawks optimization algorithm is introduced. Section 3 presents the proposed QRHHO algorithm. In Sect. 4, the simulation experiments are implemented and the results are analyzed in detail. Finally, the conclusions and future work are discussed in Sect. 5.

2 The basic Harris hawks optimization algorithm

Harris hawks optimization is a new population-based, nature-inspired optimization algorithm that mimics Harris hawks' behavior of searching for and attacking prey. It mainly includes exploratory and exploitative phases. In HHO, the prey energy is proposed to represent the escape of the prey. Harris hawks adopt different strategies to catch prey according to the change of this energy, which is modeled as:
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems
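In the standard HHO formulation, the escaping energy decays as E = 2·E0·(1 − t/T), with the initial energy E0 redrawn uniformly from (−1, 1) at every iteration so that |E| shrinks over the run; a minimal sketch, assuming this standard form (function and variable names are our own, not the authors' code):

```python
import random

def escaping_energy(t, T):
    """Escaping energy of the prey in standard HHO.

    E0 is the initial energy, redrawn uniformly from (-1, 1) at each
    iteration; E = 2 * E0 * (1 - t / T) decays linearly to 0 as t
    approaches the iteration budget T. |E| >= 1 triggers exploration,
    |E| < 1 triggers exploitation.
    """
    e0 = random.uniform(-1.0, 1.0)      # initial energy, redrawn each iteration
    return 2.0 * e0 * (1.0 - t / T)     # current escaping energy

# The magnitude of E shrinks over the run, shifting hawks from
# exploration toward exploitation.
print(abs(escaping_energy(0, 500)) <= 2.0)  # True
```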
If the hawks see their prey running away, they make more deceptive movements and also begin to make irregular, sudden and rapid dives as they approach the rabbit. To model this escaping behavior of the prey, the Lévy flight is introduced during optimization. The rule is as follows:

Z = Y + S \times LF(D)   (10)

where D is the dimension of the problem, S indicates a random vector of size 1 \times D and LF is the Lévy flight function, which is calculated as:

LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}   (11)

\sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi\beta/2)}{\Gamma\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{(\beta-1)/2}} \right)^{1/\beta}   (12)

where u, v are random numbers in (0, 1) and \beta is a default constant set to 1.5 here. Hence, the mathematical model can be described as:

X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases}   (13)

where Y and Z are obtained using Eqs. (9) and (10). During each step, HHO only selects the better position, Y or Z, as the next position. This strategy is applied to all search agents.

Algorithm 1: Harris hawks optimization algorithm
  Generate a random population of hawks Xi (i = 1, 2, ..., n)
  Calculate the fitness value of each hawk
  Xrabbit is the location of the rabbit (best solution)
  while (t < maximum number of iterations)
    for each hawk (Xi)
      Update the initial energy E0 by Eq. (2)
      Update the prey energy E by Eq. (1)
      Update the jump strength J by Eq. (7)
      if (|E| >= 1)
        Update the position of the current solution by Eq. (3)
      end if
      if (|E| < 1)
        if (r >= 0.5 and |E| >= 0.5)
          Update the position of the current solution by Eq. (5)
        else if (r >= 0.5 and |E| < 0.5)
          Update the position of the current solution by Eq. (8)
        else if (r < 0.5 and |E| >= 0.5)
          Update the position of the current solution by Eq. (13)
        else if (r < 0.5 and |E| < 0.5)
          Update the position of the current solution by Eq. (14)
        end if
      end if
    end for
    Check if any solution goes beyond the search space and amend it
    Calculate the fitness of each hawk
    If there is a better solution, update Xrabbit
    t = t + 1
  end while
  Return Xrabbit
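The Lévy-flight step of Eqs. (10)–(12) can be sketched as follows. This is an illustrative reimplementation rather than the authors' code; it assumes, as in the common Lévy-flight formulation, that u ~ N(0, σ²) and v ~ N(0, 1), and the function names are our own:

```python
import math
import random

def levy_flight(dim, beta=1.5):
    """Levy flight vector LF(D) following Eqs. (11)-(12) with beta = 1.5."""
    # Eq. (12): scale factor sigma for the numerator sample u
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    step = []
    for _ in range(dim):
        u = random.gauss(0.0, sigma)  # u ~ N(0, sigma^2)
        v = random.gauss(0.0, 1.0)    # v ~ N(0, 1)
        # Eq. (11): heavy-tailed step component
        step.append(0.01 * u / abs(v) ** (1 / beta))
    return step

def dive(y, s):
    """Eq. (10): Z = Y + S * LF(D), with S a random 1 x D vector."""
    lf = levy_flight(len(y))
    return [yi + si * li for yi, si, li in zip(y, s, lf)]
```

Per Eq. (13), HHO would then evaluate both Y and the dive result Z and keep whichever has the better fitness.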
x^o = lb + ub - x   (17)

where x \in \mathbb{R} and x \in [lb, ub]. This definition can be generalized to d dimensions by using the following equation:

x^o_i = lb_i + ub_i - x_i   (18)

where x_i \in \mathbb{R} and x_i \in [lb_i, ub_i] \forall i \in \{1, 2, \ldots, d\}.

3.2 Quasi-opposition-based learning

A variant of OBL called quasi-opposition-based learning (QOBL) was proposed by Rahnamayan et al. (2007). Previous research has already proved that using quasi-opposite numbers is more effective than opposite numbers in finding the global optimal solution (Guha et al. 2016; Sharma et al. 2016; Shiva and Mukherjee 2015; Truong et al. 2019). On the basis of the opposite number, the quasi-opposite number x^{qo} of x is defined by

x^{qo} = rand\left(\frac{lb + ub}{2}, x^o\right)   (19)

where (lb + ub)/2 represents the center of the interval [lb, ub] and rand((lb + ub)/2, x^o) is a random number uniformly distributed between (lb + ub)/2 and x^o.

Similarly, the quasi-opposite number can be extended to d-dimensional space by using the following equation:

x^{qo}_i = rand\left(\frac{lb_i + ub_i}{2}, x^o_i\right)   (20)

3.3 Quasi-reflection-based learning

Based on OBL and QOBL, a new quasi-reflection-based learning mechanism (QRBL) was proposed in the literature (Ewees et al. 2018b). The quasi-reflected number x^{qr} is obtained by reflecting the quasi-opposite number x^{qo}, and is defined by

x^{qr} = rand\left(\frac{lb + ub}{2}, x\right)   (21)

where rand((lb + ub)/2, x) is a random number uniformly distributed between (lb + ub)/2 and x.

To make the above definitions clearer, Fig. 1 illustrates a point x, its opposite point x^o, its quasi-opposite point x^{qo} and its quasi-reflected point x^{qr}.

The quasi-reflected number can also be extended to d-dimensional space as follows:

x^{qr}_i = rand\left(\frac{lb_i + ub_i}{2}, x_i\right)   (22)

3.4 The proposed quasi-reflected Harris hawks optimization (QRHHO) algorithm

Similar to other swarm intelligence algorithms, HHO may be trapped in local optima and suffer from slow convergence. Therefore, the QRHHO algorithm, a combination of QRBL and HHO, is proposed to enhance the performance of HHO. The proposed method consists of two stages: firstly, the QRBL mechanism is applied to population initialization to improve the quality and diversity of the initial population; secondly, the QRBL strategy is added to each population location update to improve the convergence rate.

In the initial phase, the QRHHO algorithm generates a random population P_0 = \{X_{ij}\}, i = 1, 2, \ldots, N; j = 1, 2, \ldots, D (N is the population size and D is the dimension of the problem). Then, QRBL is used to calculate the quasi-reflected solution of each solution in the population, so as to obtain a quasi-reflected population P_0^{qr} = \{X_{ij}^{qr}\}, i = 1, 2, \ldots, N; j = 1, 2, \ldots, D. The fitness values of the two populations are calculated and compared, and the best N individuals of the two populations are selected as the initial population. The pseudocode is shown as follows:
Fig. 1 Opposite points defined in domain [lb, ub]. xo is the opposite point of x, xqo and xqr represent the quasi-opposite and quasi-reflected points,
respectively
QRBL initializes the population
  Generate a random population of hawks P0 = {Xij}, i = 1, 2, ..., N; j = 1, 2, ..., D
  Generate the quasi-reflected population P0^qr = {Xij^qr}, i = 1, 2, ..., N; j = 1, 2, ..., D
  for i = 1 : N
    for j = 1 : D
      Mj = (lbj + ubj) / 2
      r is a random number in [0, 1]
      if Xi,j < Mj
        Xi,j^qr = Xi,j + (Mj - Xi,j) * r
      else
        Xi,j^qr = Mj + (Xi,j - Mj) * r
      end
    end
  end
  Calculate and compare the fitness values of the two populations
  Select the best N individuals from P0 and P0^qr as the initial population

During the updating phase, hawk positions are updated by using the standard HHO and a new population P is obtained. Then, the QRBL method is used again to generate the quasi-reflected population P^qr. According to the fitness values of P and P^qr, QRHHO selects the best N individuals from the two populations as the next initial population. The previous steps are repeated until the maximum number of iterations is reached. The pseudocode is shown below:

QRBL updates the current population
  Use HHO to generate the population P = {Xij}, i = 1, 2, ..., N; j = 1, 2, ..., D
  Generate the quasi-reflected population P^qr = {Xij^qr}, i = 1, 2, ..., N; j = 1, 2, ..., D
  for i = 1 : N
    for j = 1 : D
      Mj = (lbj + ubj) / 2
      r is a random number in [0, 1]
      if Xi,j < Mj
        Xi,j^qr = Xi,j + (Mj - Xi,j) * r
      else
        Xi,j^qr = Mj + (Xi,j - Mj) * r
      end
    end
  end
  Calculate and compare the fitness values of the two populations
  Select the best N individuals from P and P^qr as the next initial population

In summary, the main steps of QRHHO are as follows:

Step 1 Initialize parameters including the population size N, maximum iteration number T, search dimension D, and the upper bound UB and lower bound LB of the search space;

Step 2 Randomly generate a population P0 in the search space and use the QRBL method to obtain its quasi-reflected population P0^qr. Then, select the optimal N individuals from the two populations as the initial population;

Step 3 Calculate the fitness value of each individual in the current population and find the location with the best fitness value as the optimal location Xrabbit;

Step 4 Update the prey energy E. According to the value of E, select one search strategy from Eqs. (2), (5), (8), (13) and (14) to update the individual locations of the current population;

Step 5 Generate the quasi-reflected population P^qr of the current population P by using the QRBL strategy again, and then choose the best N individuals from the combination of P and P^qr as the initial population for the next iteration;

Step 6 Output the optimal individual if the number of iterations reaches T. Otherwise, return to Step 3.

3.5 Computational complexity of QRHHO

The computational complexity of HHO mainly consists of three parts: initialization, fitness evaluation and population updating, which can be calculated as follows:

O(HHO) = O(N) + O(T \times N) + O(T \times N \times D)   (23)

The complexity of HHO increases with the application of the QRBL method. In each iteration, QRBL performs operations on N individuals to produce N quasi-reflected individuals, and the algorithm calculates the fitness values of the 2N individuals. After ranking the 2N fitness values, the best N individuals are selected to continue updating. The complexity of this process is O(N \times D + 2N + 2N \lg 2N). Since the QRBL method is added during the initial phase and the position updating phase, the whole computational complexity of QRHHO is

O(QRHHO) = O(N + (N \times D + 2N + 2N \lg 2N)) + O(T \times N) + O(T \times (N \times D + (N \times D + 2N + 2N \lg 2N))) = O(N \times (3 + D + 2 \lg 2N + T \times (3 + 2D + 2 \lg 2N)))   (24)

According to the above analysis, it is obvious that the computational complexity of QRHHO is higher than that of the basic HHO.
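The QRBL initialization stage described above (generate P0, build its quasi-reflected counterpart, keep the best N of the 2N candidates) can be sketched in Python as follows. This is an illustrative reimplementation, not the authors' code; it assumes M_j is the interval centre (lb_j + ub_j)/2 and that smaller fitness values are better:

```python
import random

def quasi_reflected(x, lb, ub):
    """Quasi-reflected point of x: a uniform sample between x and the
    interval centre M = (lb + ub) / 2, per Eq. (21)."""
    return [random.uniform(min(xi, (l + u) / 2), max(xi, (l + u) / 2))
            for xi, l, u in zip(x, lb, ub)]

def qrbl_initialize(n, lb, ub, fitness):
    """Build a random population, its quasi-reflected twin, and keep the
    best n of the 2n candidates (smaller fitness is better)."""
    pop = [[random.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(n)]
    qr_pop = [quasi_reflected(x, lb, ub) for x in pop]
    # Rank the merged 2n candidates by fitness and keep the best n.
    return sorted(pop + qr_pop, key=fitness)[:n]

# Example: initialize for minimizing the 2-D sphere function.
best = qrbl_initialize(10, [-5.0, -5.0], [5.0, 5.0],
                       fitness=lambda x: sum(v * v for v in x))
```

The same selection step is reused during updating (Step 5), with the HHO-updated population P in place of the random P0.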
4 Experiments and analysis

4.1 Benchmark test functions and parameter settings

In this section, twenty-three benchmark functions (Xin et al. 1999) are selected to verify the effectiveness of the proposed QRHHO. These benchmark test functions are divided into three groups according to their characteristics: unimodal, multimodal and fixed-dimension multimodal functions. The unimodal functions (F1–F7) have only one optimal solution and are often used to test the exploitation capability of an algorithm. The second kind of test function (F8–F13) is multimodal, characterized by multiple optima. Since it is not easy to find the global optimal solution for these functions, they can be used to test the ability of exploration and of jumping out of local optima. The last type is the fixed-dimension multimodal functions (F14–F23), which also have a large number of optima. However, due to their low dimension, it is easy to find the optimum, and thus they can be used to test the stability of an algorithm. The definition and description of each function are given in Tables 1, 2 and 3, where D represents the dimension of the function, range is the boundary of the function's search space and fmin indicates the optimum. Typical 3D shapes of some selected benchmark test functions are shown in Fig. 2.

Before starting the simulation experiments, the maximum number of iterations T and the population size N are set to 500 and 30, respectively. Then, each optimization algorithm runs 30 times on each benchmark function independently.

4.2 Comparison with basic HHO and two variants of HHO

To evaluate the advantage of the QRBL mechanism on the benchmark test problems, we compare QRHHO with the basic HHO and two variants of HHO. The two variants are HHO with opposition-based learning (OHHO) and HHO with quasi-opposition-based learning (QOHHO).

The computational results for the 30-dimensional unimodal and multimodal functions are recorded in Tables 4 and 5, respectively, while those for the 100-dimensional unimodal and multimodal functions are shown in Tables 6 and 7. Finally, the results for the fixed-dimension multimodal functions are reported in Table 8. All the obtained results are presented in terms of the average value (AVE), the standard deviation (STD) and the best value (BEST), among which a better AVE shows that the performance of the algorithm is superior to the others, while a smaller STD indicates that the algorithm is more stable. BEST represents the best result found by the algorithm in the 30 simulation experiments.

4.2.1 Analysis of statistical results for unimodal benchmark functions

The unimodal functions can be used to analyze the exploitation ability of an optimization algorithm, since there is only one global optimal solution and no other local optima exist. When the dimension D is set to 30, it can be clearly seen from Table 4 that the QRHHO algorithm is superior to the other algorithms on most test functions, especially functions F1 and F3, for which the theoretical optimal values are accurately obtained by QRHHO, while the other three algorithms cannot find the
Table 1 Description of unimodal benchmark functions

Function | D | Range | fmin
F1(x) = \sum_{i=1}^{n} x_i^2 | 30/100 | [-100, 100]^D | 0
F2(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | 30/100 | [-10, 10]^D | 0
F3(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 | 30/100 | [-100, 100]^D | 0
Table 2 Description of multimodal benchmark functions

Function | D | Range | fmin
F8(x) = \sum_{i=1}^{n} -x_i \sin(\sqrt{|x_i|}) | 30/100 | [-500, 500]^D | -418.9829 \times D
F9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right] | 30/100 | [-5.12, 5.12]^D | 0
F10(x) = -20\exp\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\right) + 20 + e | 30/100 | [-32, 32]^D | 0
F11(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1 | 30/100 | [-600, 600]^D | 0
F12(x) = \frac{\pi}{n}\left\{10\sin^2(\pi y_1) + \sum_{i=1}^{n-1}(y_i - 1)^2\left[1 + 10\sin^2(\pi y_{i+1})\right] + (y_n - 1)^2\right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + \frac{x_i + 1}{4} and u(x_i, a, k, m) = k(x_i - a)^m if x_i > a; 0 if -a \le x_i \le a; k(-x_i - a)^m if x_i < -a | 30/100 | [-50, 50]^D | 0
F13(x) = 0.1\left\{\sin^2(3\pi x_1) + \sum_{i=1}^{n-1}(x_i - 1)^2\left[1 + \sin^2(3\pi x_{i+1})\right] + (x_n - 1)^2\left[1 + \sin^2(2\pi x_n)\right]\right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4) | 30/100 | [-50, 50]^D | 0

Table 3 Description of fixed-dimension multimodal benchmark functions (F21–F23 shown)

Function | D | Range | fmin
F21(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | 4 | [0, 10]^D | -10.1532
F22(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | 4 | [0, 10]^D | -10.4028
F23(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1} | 4 | [0, 10]^D | -10.5363
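As a concrete reading of the definitions above, three of the benchmarks (F1, F9 and F11) can be transcribed as follows; this is our own transcription of the standard formulas, not code from the paper:

```python
import math

def f1(x):
    """F1, sphere: sum of squares; global minimum 0 at the origin."""
    return sum(v * v for v in x)

def f9(x):
    """F9, Rastrigin: sum of x_i^2 - 10*cos(2*pi*x_i) + 10; minimum 0 at origin."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def f11(x):
    """F11, Griewank: quadratic bowl plus an oscillatory product; minimum 0 at origin."""
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i)) for i, v in enumerate(x, start=1))
    return s - p + 1

zero = [0.0] * 30
print(f1(zero), f9(zero), f11(zero))  # 0.0 0.0 0.0
```

Evaluating each at the origin recovers the fmin = 0 values listed in the tables.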
Table 4 Results of 30-dimensional unimodal benchmark functions (F1–F7)

Function | Index | HHO | OHHO | QOHHO | QRHHO
F1 | AVE | 4.59E-96 | 3.12E-320 | 0 | 0
F1 | STD | 2.37E-95 | 0 | 0 | 0
F1 | BEST | 2.06E-112 | 0 | 0 | 0
F2 | AVE | 4.22E-49 | 3.28E-175 | 1.91E-156 | 3.55E-268
F2 | STD | 2.31E-48 | 0 | 1.04E-155 | 0
F2 | BEST | 5.29E-57 | 1.23E-220 | 1.74E-186 | 2.80E-286
F3 | AVE | 5.04E-73 | 1.10E-163 | 5.44E-190 | 0
F3 | STD | 2.76E-72 | 0 | 0 | 0
F3 | BEST | 3.27E-98 | 0 | 3.84E-242 | 0
F4 | AVE | 6.34E-50 | 2.77E-148 | 1.33E-177 | 2.07E-244
F4 | STD | 1.77E-49 | 1.52E-147 | 0 | 0
F4 | BEST | 1.31E-56 | 6.56E-197 | 5.11E-190 | 5.42E-266
F5 | AVE | 1.15E-02 | 1.02E-02 | 1.10E-01 | 1.16E-01
F5 | STD | 2.36E-02 | 1.44E-02 | 1.22E-01 | 1.06E-01
F5 | BEST | 2.50E-05 | 2.45E-07 | 4.44E-03 | 1.04E-02
F6 | AVE | 1.17E-04 | 9.46E-05 | 1.51E-03 | 1.25E-03
F6 | STD | 1.56E-04 | 1.07E-04 | 6.67E-04 | 5.39E-04
F6 | BEST | 4.29E-09 | 1.16E-08 | 2.59E-04 | 2.64E-04
F7 | AVE | 1.91E-04 | 1.55E-04 | 1.00E-04 | 8.04E-05
F7 | STD | 2.22E-04 | 2.05E-04 | 8.75E-05 | 5.36E-05
F7 | BEST | 3.10E-06 | 1.05E-05 | 6.53E-06 | 4.19E-06

The best values obtained are in bold
Table 5 Results of 30-dimensional multimodal benchmark functions (F8–F13)

Function | Index | HHO | OHHO | QOHHO | QRHHO
F8 | AVE | -1.26E+04 | -1.26E+04 | -1.21E+04 | -1.26E+04
F8 | STD | 7.22E-01 | 7.60E-01 | 1.16E+03 | 6.17E+02
F8 | BEST | -1.26E+04 | -1.26E+04 | -1.26E+04 | -1.26E+04
F9 | AVE | 0 | 0 | 0 | 0
F9 | STD | 0 | 0 | 0 | 0
F9 | BEST | 0 | 0 | 0 | 0
F10 | AVE | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16
F10 | STD | 0 | 0 | 0 | 0
F10 | BEST | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16
F11 | AVE | 0 | 0 | 0 | 0
F11 | STD | 0 | 0 | 0 | 0
F11 | BEST | 0 | 0 | 0 | 0
F12 | AVE | 1.07E-05 | 5.62E-06 | 1.39E-04 | 9.37E-05
F12 | STD | 1.41E-05 | 7.45E-06 | 5.99E-05 | 4.81E-05
F12 | BEST | 6.16E-09 | 1.99E-07 | 6.06E-05 | 2.61E-05
F13 | AVE | 1.30E-04 | 6.53E-05 | 2.91E-03 | 1.42E-03
F13 | STD | 2.44E-04 | 7.06E-05 | 5.95E-03 | 5.11E-04
F13 | BEST | 1.37E-07 | 2.02E-07 | 6.17E-05 | 6.28E-04

The best values obtained are in bold
Table 6 Results of 100-dimensional unimodal benchmark functions (F1–F7)

Function | Index | HHO | OHHO | QOHHO | QRHHO
F1 | AVE | 4.14E-94 | 1.73E-298 | 2.07E-307 | 0
F1 | STD | 2.22E-93 | 0 | 0 | 0
F1 | BEST | 4.63E-110 | 0 | 0 | 0
F2 | AVE | 1.17E-49 | 1.90E-165 | 6.65E-123 | 5.03E-253
F2 | STD | 5.21E-49 | 0.00E+00 | 3.46E-122 | 0
F2 | BEST | 1.82E-57 | 5.80E-203 | 2.96E-155 | 2.00E-272
F3 | AVE | 1.91E-62 | 6.90E-115 | 6.14E-141 | 0
F3 | STD | 1.04E-61 | 3.25E-114 | 3.37E-140 | 0
F3 | BEST | 7.41E-87 | 4.18E-273 | 2.11E-185 | 0
F4 | AVE | 1.31E-46 | 2.85E-148 | 7.01E-148 | 2.04E-233
F4 | STD | 7.04E-46 | 1.56E-147 | 3.18E-147 | 0
F4 | BEST | 3.27E-57 | 2.01E-184 | 2.78E-165 | 3.98E-252
F5 | AVE | 4.11E-02 | 7.78E-02 | 9.00E-01 | 8.05E-01
F5 | STD | 5.77E-02 | 1.81E-01 | 1.01E+00 | 5.72E-01
F5 | BEST | 9.61E-07 | 4.65E-05 | 3.10E-02 | 1.02E-01
F6 | AVE | 5.48E-04 | 4.57E-04 | 1.16E-02 | 1.57E-02
F6 | STD | 6.61E-04 | 7.45E-04 | 5.29E-03 | 6.44E-03
F6 | BEST | 1.79E-07 | 1.30E-09 | 3.28E-03 | 2.86E-03
F7 | AVE | 1.37E-04 | 1.64E-04 | 9.67E-05 | 8.86E-05
F7 | STD | 1.36E-04 | 1.46E-04 | 1.02E-04 | 8.12E-05
F7 | BEST | 3.46E-06 | 1.12E-05 | 3.15E-06 | 7.16E-06

The best values obtained are in bold
optimal solution. For functions F2, F4 and F7, the average values of QRHHO decrease obviously, while for functions F5 and F6, OHHO obtains better solutions than the other algorithms. Compared with HHO, OHHO also achieves a certain degree of improvement by applying the common OBL mechanism.
Table 7 Results of 100-dimensional multimodal benchmark functions (F8–F13)

Function | Index | HHO | OHHO | QOHHO | QRHHO
F8 | AVE | -4.19E+04 | -4.19E+04 | -4.07E+04 | -4.15E+04
F8 | STD | 4.39E+01 | 4.48E+01 | 2.11E+03 | 1.47E+02
F8 | BEST | -4.19E+04 | -4.19E+04 | -4.19E+04 | -4.19E+04
F9 | AVE | 0 | 0 | 0 | 0
F9 | STD | 0 | 0 | 0 | 0
F9 | BEST | 0 | 0 | 0 | 0
F10 | AVE | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16
F10 | STD | 0 | 0 | 0 | 0
F10 | BEST | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16
F11 | AVE | 0 | 0 | 0 | 0
F11 | STD | 0 | 0 | 0 | 0
F11 | BEST | 0 | 0 | 0 | 0
F12 | AVE | 4.44E-06 | 3.19E-06 | 1.58E-04 | 1.25E-04
F12 | STD | 5.07E-06 | 4.18E-06 | 5.32E-05 | 6.00E-05
F12 | BEST | 3.02E-09 | 1.31E-08 | 5.65E-05 | 3.30E-05
F13 | AVE | 1.26E-04 | 9.51E-05 | 4.07E-03 | 5.69E-03
F13 | STD | 2.26E-04 | 8.87E-05 | 2.61E-03 | 2.13E-03
F13 | BEST | 2.15E-07 | 6.45E-10 | 6.46E-04 | 1.31E-04

The best values obtained are in bold
When the search space dimension is increased to 100, we can see from Table 6 that QRHHO still achieves better solutions than the other algorithms for functions F1–F4 and F7. HHO outperforms the other algorithms on function F5, whereas for function F6, HHO obtains the best STD and OHHO obtains the best AVE and BEST. Hence, QRHHO still maintains the best search performance on most high-dimensional unimodal functions. This fully proves the superiority of the exploitation ability of QRHHO.

4.2.2 Analysis of statistical results for multimodal benchmark functions

Compared with the unimodal test functions, each multimodal test function has many optima, among which one is global and the rest are local. These multimodal test problems are generally used to evaluate the exploration ability of a search algorithm. As shown in Tables 5 and 7, for function F8, the results of the four algorithms are very close. For functions F9–F11, every algorithm obtains the theoretical optimal solution. With respect to the 100-dimensional functions F12 and F13, OHHO achieves better solutions than the other algorithms.

For the multimodal benchmark functions, however, the performance improvement of QRHHO is not very obvious. This is mainly because the basic HHO already provides very competitive results for these functions.

4.2.3 Analysis of statistical results for fixed-dimension multimodal functions

Functions F14–F23 are multimodal functions with fixed small dimensions, which can be used to test the stability and exploration ability of an algorithm. As can be seen from Table 8, for all fixed-dimension multimodal functions, the performance of QRHHO surpasses the basic HHO in an all-round way, and the solutions are very close to the theoretical values on most functions. In particular, HHO cannot find the theoretical values for functions F21–F23, while QRHHO can find them easily. Compared with OHHO and QOHHO, the AVEs of QRHHO are in the lead overall, and the STDs of QRHHO are the minimum except for functions F18, F20 and F22. Therefore, it can be concluded that QRHHO maintains high stability and exploration ability on this type of benchmark function.

4.2.4 Convergence analysis

The convergence curves of some functions are selected and shown in Figs. 3, 4 and 5. We notice that the QRHHO algorithm starts to converge from the early generations on all unimodal functions (Figs. 3, 4). In particular, only QRHHO could obtain the global optimum of functions F1 and F3. For the multimodal functions F9–F11, the convergence speed of QRHHO from the initial generation is much faster than that of HHO, and also slightly higher than that
of QOHHO and OHHO. For several fixed-dimension multimodal functions, QRHHO accelerates the convergence of these test functions compared with HHO (Fig. 5). Hence, we conclude that, in general, the proposed QRHHO algorithm shows better solution accuracy and convergence speed than the other three algorithms on most benchmark functions.

4.3 Comparison with other swarm-based intelligent algorithms

To further verify the superiority of the QRHHO algorithm, we compare it with six other well-known optimization algorithms: the whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), particle swarm optimization (PSO) (Kennedy and Eberhart 2002), the grey wolf optimizer (GWO) (Mirjalili et al. 2014), the sine cosine algorithm (SCA) (Mirjalili 2016), the salp swarm algorithm (SSA) (Mirjalili et al. 2017) and the multi-verse optimizer (MVO) (Mirjalili et al. 2015). For a fair comparison, the maximum number of iterations of the seven algorithms is set to 500, and the population size is set to 30. The other parameters are listed in Table 9.

The results obtained are shown in Tables 10, 11 and 12. From Table 10, for the 30-dimensional unimodal and multimodal functions, QRHHO performs better than the other algorithms on most functions because of its minimum AVE and STD. As can be seen from Table 11, QRHHO outperforms the other algorithms on all 100-dimensional unimodal and multimodal functions. From Table 12, for the fixed-dimension functions F14–F15 and F21–F23, QRHHO achieves the BEST value for each function, while the other algorithms cannot. For the other functions F16–F20, these
Fig. 3 Comparison of the convergence curves for QRHHO and the other three algorithms obtained in some of the 30-dimensional benchmark
functions
Fig. 4 Comparison of the convergence curves for QRHHO and the other three algorithms obtained in some of the 100-dimensional benchmark
functions
algorithms can also obtain the optimal solution because the function problems are relatively not too difficult. These results demonstrate the superiority and stability of the QRHHO algorithm for solving global optimization
Fig. 5 Comparison of the convergence curves for QRHHO and the other three algorithms obtained in some of the fixed-dimension benchmark
functions
solution and selects the better one from the two solutions.
Table 9 Parameter settings
With successive iterative calculation, QOBL expands the
Algorithm Parameter Value search space continuously, which makes QRHHO con-
PSO Maximum inertia weight (wmax ) 0.09
verging to the satisfied solution faster. As shown in the
Minimum inertia weight (wmin ) 0.04
second competitive experiment, QRHHO also takes the
lead in comparison with the other six well-known algo-
Maximum velocity (vmax ) 1
rithms. The optimization results further demonstrate the
Minimum velocity (vmax ) -1
superiority of the QRBL mechanism.
Cognitive coefficient(c1 ) 2
Nevertheless, it should be pointed out that QRHHO
Cognitive coefficient(c2 ) 2
cannot be regarded as a universal optimization algorithm
GWO Convergence constant (a) [0, 2]
according to the no free lunch theorem. For example, on
MVO Maximum of wormhole existence probability 1
unimodal functions F5 and F6, together with multimodal
Minimum of wormhole existence probability 0.2
functions F12 and F13, the convergence accuracy of
SCA Convergence constant (r1 ) [0, 2]
QRHHO is a little worse than that of the basic HHO.
SSA Coefficient (c1 ) [2/e, 2]
Consequently, QRHHO also has some limitations and
WOA Convergence constant (a) [0, 2]
needs further testing and application.
Coefficient (b) 1
problems, and QRHHO is also suitable for solving high-dimensional optimization problems.

4.4 Discussion on the performance of QRHHO

From the first competitive experiment, we observe that the optimization results of QRHHO are better than those of not only the basic HHO but also OHHO and QOHHO on most benchmark functions. The three variants of HHO are improved by three corresponding OBL mechanisms, so the differences among these mechanisms account for the performance differences among the variants. As for the comparison of the three mechanisms, Ewees et al. (2018b) proved that QRBL is more likely to obtain accurate solutions than OBL and QOBL. In the optimization process, QRHHO uses the QRBL mechanism to calculate the quasi-reflection solution of the candidate

5 Conclusion

This paper presents a novel quasi-reflected Harris hawks optimization (QRHHO) algorithm, which combines the standard Harris hawks optimization algorithm with the quasi-reflection-based learning (QRBL) mechanism. The proposed method applies the QRBL mechanism in the initialization phase and in each population-update phase of the optimization process. To verify the performance of QRHHO, twenty-three benchmark functions of various types and dimensions were evaluated to analyze the exploration capability, the exploitation capability and the convergence behavior of the proposed algorithm. The experimental results show that QRHHO performs better than the basic HHO, two variants of HHO and six other swarm-based intelligent algorithms, and that the QRBL mechanism helps HHO find the optimal solution faster. In future work, the proposed QRHHO is expected to be applied to practical
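The QRBL operator discussed above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function names, population size and sphere objective below are assumptions for illustration; only the operator itself follows the quasi-reflection definition the paper relies on (a quasi-reflected point is drawn uniformly between the search-interval centre c = (lb + ub)/2 and the candidate itself).

```python
import numpy as np

def quasi_reflected(pop, lb, ub):
    """Quasi-reflection-based learning (QRBL) operator.

    For each candidate x in [lb, ub], draw a point uniformly at random
    between the interval centre c = (lb + ub) / 2 and x itself.
    """
    c = (lb + ub) / 2.0
    r = np.random.rand(*pop.shape)  # one uniform factor per coordinate
    return c + r * (pop - c)

def qrbl_step(pop, fit, lb, ub, objective):
    """Greedy selection between each candidate and its quasi-reflected
    counterpart (minimisation assumed): the fitter of the pair survives."""
    qr = quasi_reflected(pop, lb, ub)
    fit_qr = np.apply_along_axis(objective, 1, qr)
    keep_qr = fit_qr < fit
    pop = np.where(keep_qr[:, None], qr, pop)
    return pop, np.minimum(fit, fit_qr)

# Illustrative usage on the sphere function (hypothetical setup):
# 30 hawks, each a 10-dimensional point in [-100, 100].
lb, ub = -100.0, 100.0
pop = np.random.uniform(lb, ub, size=(30, 10))
sphere = lambda x: float(np.sum(x ** 2))
fit = np.apply_along_axis(sphere, 1, pop)
pop, fit = qrbl_step(pop, fit, lb, ub, sphere)  # never worsens any hawk
```

In QRHHO, a step of this kind would be applied once to the initial population and again after each position update, which is how QRBL both raises the initial population diversity and accelerates convergence.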
A novel quasi-reflected Harris hawks optimization algorithm for global optimization problems
Springer, pp 73–85. https://doi.org/10.1007/978-3-319-27400-3_7
Das S, Bhattacharya A, Chakraborty AK (2016) Quasi-reflected ions motion optimization algorithm for short-term hydrothermal scheduling. Neural Comput Appl 29:123–149. https://doi.org/10.1007/s00521-016-2529-8
Das S, Bhattacharya A, Chakraborty AK (2017) Solution of short-term hydrothermal scheduling problem using quasi-reflected symbiotic organisms search algorithm considering multi-fuel cost characteristics of thermal generator. Arab J Sci Eng 43:2931–2960. https://doi.org/10.1007/s13369-017-2973-5
Dinkar SK, Deep K (2018) An efficient opposition based Lévy flight antlion optimizer for optimization problems. J Comput Sci 29:119–141. https://doi.org/10.1016/j.jocs.2018.10.002
Ergezer M, Simon D, Du D (2009) Oppositional biogeography-based optimization. In: 2009 IEEE international conference on systems, man and cybernetics. IEEE, pp 1009–1014. https://doi.org/10.1109/ICSMC.2009.5346043
Ewees AA, Abd Elaziz M, Houssein EH (2018a) Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst Appl 112:156–172. https://doi.org/10.1016/j.eswa.2018.06.023
Ewees AA, Elaziz MA, Houssein EH (2018b) Improved grasshopper optimization algorithm using opposition-based learning. Expert Syst Appl 112:156–172. https://doi.org/10.1016/j.eswa.2018.06.023
Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Nonlinear Sci Numer Simul 17:4831–4845. https://doi.org/10.1016/j.cnsns.2012.05.010
Guha D, Roy PK, Banerjee S (2016) Load frequency control of large scale power system using quasi-oppositional grey wolf optimization algorithm. Eng Sci Technol Int J 19:1693–1713. https://doi.org/10.1016/j.jestch.2016.07.004
Hashim FA, Houssein EH, Mabrouk MS, Al-Atabany W, Mirjalili S (2019) Henry gas solubility optimization: a novel physics-based algorithm. Future Gener Comput Syst 101:646–667. https://doi.org/10.1016/j.future.2019.07.015
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872. https://doi.org/10.1016/j.future.2019.02.028
Holland JH (1992) Genetic algorithms. Sci Am 267:66–73. https://doi.org/10.1038/scientificamerican0792-66
Houssein EH, Hosney ME, Oliva D, Mohamed WM, Hassaballah M (2020a) A novel hybrid Harris hawks optimization and support vector machines for drug design and discovery. Comput Chem Eng 133:106656. https://doi.org/10.1016/j.compchemeng.2019.106656
Houssein EH, Saad MR, Hussain K, Zhu W, Shaban H, Hassaballah M (2020b) Optimal sink node placement in large scale wireless sensor networks based on Harris' hawk optimization algorithm. IEEE Access 8:19381–19397. https://doi.org/10.1109/ACCESS.2020.2968981
Hussain K, Zhu W, Salleh MNM (2019) Long-term memory Harris' hawk optimization for high dimensional and optimal power flow problems. IEEE Access 7:147596–147616. https://doi.org/10.1109/ACCESS.2019.2946664
Jia H, Lang C, Oliva D, Song W, Peng X (2019) Dynamic Harris hawks optimization with mutation mechanism for satellite image segmentation. Remote Sens 11:1421. https://doi.org/10.3390/rs11121421
Kamboj VK, Nandi A, Bhadoria A, Sehgal S (2020) An intensify Harris Hawks optimizer for numerical and engineering optimization problems. Appl Soft Comput 89:106018. https://doi.org/10.1016/j.asoc.2019.106018
Karaboga D, Basturk B (2007) A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Glob Optim 39:459–471. https://doi.org/10.1007/s10898-007-9149-x
Kennedy J, Eberhart R (2002) Particle swarm optimization. In: ICNN'95 international conference on neural networks. https://doi.org/10.1109/icnn.1995.488968
Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science 220:671–680. https://doi.org/10.1126/science.220.4598.671
Kurtuluş E, Yildiz A, Sait SM (2020) A novel hybrid Harris hawks-simulated annealing algorithm and RBF-based metamodel for design optimization of highway guardrails. Mater Test 62:1–15. https://doi.org/10.3139/120.111478
Mirjalili S (2015a) The ant lion optimizer. Adv Eng Softw 83:80–98. https://doi.org/10.1016/j.advengsoft.2015.01.010
Mirjalili S (2015b) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl-Based Syst 89:228–249. https://doi.org/10.1016/j.knosys.2015.07.006
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl-Based Syst 96:120–133. https://doi.org/10.1016/j.knosys.2015.12.022
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007
Mirjalili S, Mirjalili SM, Hatamlou A (2015) Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput Appl 27:495–513. https://doi.org/10.1007/s00521-015-1870-7
Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002
Pan WT (2012) A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl-Based Syst 26:69–74. https://doi.org/10.1016/j.knosys.2011.07.001
Qais MH, Hasanien HM, Alghuwainem S (2020) Parameters extraction of three-diode photovoltaic model using computation and Harris Hawks optimization. Energy 195:117040. https://doi.org/10.1016/j.energy.2020.117040
Rahnamayan S, Tizhoosh HR, Salama MM (2007) Quasi-oppositional differential evolution. In: 2007 IEEE congress on evolutionary computation. IEEE, pp 2229–2236. https://doi.org/10.1109/cec.2007.4424748
Rahnamayan S, Tizhoosh HR, Salama MM (2008) Opposition-based differential evolution. IEEE Trans Evol Comput 12:64–79. https://doi.org/10.1109/TEVC.2007.894200
Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248. https://doi.org/10.1016/j.ins.2009.03.004
Saremi S, Mirjalili S, Lewis A (2017) Grasshopper optimisation algorithm: theory and application. Adv Eng Softw 105:30–47. https://doi.org/10.1016/j.advengsoft.2017.01.004
Sharma S, Bhattacharjee S, Bhattacharya A (2016) Quasi-oppositional swine influenza model based optimization with quarantine for optimal allocation of DG in radial distribution network. Int J Electr Power Energy Syst 74:348–373
Shehabeldeen TA, Elaziz MA, Elsheikh AH, Zhou J (2019) Modeling of friction stir welding process using adaptive neuro-fuzzy inference system integrated with Harris hawks optimizer. J Mater Res Technol 8:5882–5892. https://doi.org/10.1016/j.jmrt.2019.09.060
Shiva CK, Mukherjee V (2015) A novel quasi-oppositional harmony search algorithm for automatic generation control of power system. Appl Soft Comput 35:749–765. https://doi.org/10.1016/j.asoc.2015.05.054
Storn R, Price K (1997) Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11:341–359. https://doi.org/10.1023/A:1008202821328
Tizhoosh HR (2005) Opposition-based learning: a new scheme for machine intelligence. In: International conference on computational intelligence for modelling, control and automation and international conference on intelligent agents, web technologies and internet commerce (CIMCA-IAWTIC'06). IEEE, pp 695–701. https://doi.org/10.1109/cimca.2005.1631345
Too J, Abdullah AR, Mohd Saad N (2019) A new quadratic binary Harris hawk optimization for feature selection. Electronics 8:1130. https://doi.org/10.3390/electronics8101130
Truong KH, Nallagownden P, Baharudin Z, Vo DN (2019) A quasi-oppositional-chaotic symbiotic organisms search algorithm for global optimization problems. Appl Soft Comput 77:567–583. https://doi.org/10.1016/j.asoc.2019.01.043
Xin Y, Yong L, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3:82–102. https://doi.org/10.1109/4235.771163
Yildiz A, Yildiz BS (2019) The Harris hawks optimization algorithm, salp swarm algorithm, grasshopper optimization algorithm and dragonfly algorithm for structural design optimization of vehicle components. Mater Test 8:60–70. https://doi.org/10.3139/120.111379
Yildiz A, Mirjalili S, Sait SM, Li X (2019) The Harris hawks, grasshopper and multi-verse optimization algorithms for the selection of optimal machining parameters in manufacturing operations. Mater Test 8:1–15. https://doi.org/10.3139/120.111377
Yıldız AR, Yıldız BS, Sait SM, Bureerat S, Pholdee N (2019) A new hybrid Harris hawks-Nelder-Mead optimization algorithm for solving design and manufacturing problems. Mater Test 61:735–743. https://doi.org/10.3139/120.111378
Yildiz BS, Yildiz AR, Pholdee N, Bureerat S, Sait SM, Patel V (2020) The Henry gas solubility optimization algorithm for optimum structural design of automobile brake components. Mater Test 62:5–25. https://doi.org/10.3139/120.111479
Yousri D, Allam D, Eteiba MB (2020) Optimal photovoltaic array reconfiguration for alleviating the partial shading influence based on a modified Harris hawks optimizer. Energy Convers Manag 206:112470. https://doi.org/10.1016/j.enconman.2020.112470

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.