Accepted Manuscript: 10.1016/j.ins.2018.02.025
PII: S0020-0255(18)30107-5
DOI: 10.1016/j.ins.2018.02.025
Reference: INS 13430
Please cite this article as: Depeng Kong, Tianqing Chang, Wenjun Dai, Quandong Wang, Haoze Sun, An Improved Artificial Bee Colony Algorithm Based on Elite Group Guidance and Combined Breadth-Depth Search Strategy, Information Sciences (2018), doi: 10.1016/j.ins.2018.02.025
This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
ACCEPTED MANUSCRIPT
Wenjun Dai, [email protected]
Quandong Wang, [email protected]
Haoze Sun, [email protected]
If you have any queries, please don’t hesitate to contact me. Thank you and best regards.
Yours sincerely,
Depeng Kong
TEL: +86 18610675949
stochastic breadth-first search strategy for employed bees and the stochastic depth-first search strategy for onlooker bees. Thirdly, the random selection of elite bees is adopted to replace the probability-based selection of onlooker bees. Besides, the parameters influencing the optimization results are studied and the parameter values giving the best comprehensive performance are obtained. In addition, the proposed algorithm is verified experimentally on 22 benchmark functions and then compared with other improved artificial bee colony algorithms. The comparison results show that ECABC can effectively improve the convergence speed, convergence precision, and robustness.
Keywords: Artificial bee colony algorithm, Elite group guidance, Combined search strategy, Search
equation
1. Introduction
Optimization is important in the economy, society, national defense, and many other fields, but traditional optimization algorithms cannot solve all such problems. Metaheuristics can find high-quality solutions based on stochastic optimization techniques and have received wide attention from engineers and practitioners. Metaheuristics include many algorithms, such as Evolutionary Algorithms (EAs) [33, 38], Memetic Computing (MC) [6, 7], and Swarm Intelligence (SI) [43]. The genetic algorithm (GA) is a typical evolutionary algorithm [10]. The mutation and crossover operations of the genetic algorithm can search the entire solution space, but its search efficiency is low. The differential evolution algorithm (DE) [8, 9, 38, 39, 44] is considered an evolutionary algorithm because it inherits the mutation and crossover operations. Moreover, it is also regarded as a swarm intelligence algorithm (SIA) because of the one-to-one spawning in its survivor selection [9]. SIAs are developed from the foraging and breeding activities of natural species groups and can be used to solve combinatorial and numerical optimization problems; typical SIAs, such as the ant colony optimization algorithm (ACO) [46], the shuffled frog leaping algorithm (SFLA) [42], and the invasive weed optimization algorithm (IWO) [31], can obtain satisfactory solutions of NP problems. By simulating the behaviors of honey bees gathering nectar, the Turkish scholar Karaboga [23] proposed the artificial bee colony (ABC) algorithm in 2005, which has become one of the most effective swarm intelligence optimization algorithms for solving continuous optimization problems. The ABC algorithm has many advantages, such as a good optimization effect, few control parameters, and simple implementation [40]. Compared with other algorithms, such as PSO, GA, and ACO, the ABC algorithm has superior exploration ability and high convergence precision [26]. ABC has received wide attention and has been applied in many fields, such as power flow optimization [4], distribution network allocation optimization [1], dynamic clustering [35], the shortest path problem (SP) [13], and service selection and composition [45].
Although the ABC algorithm has been widely used, it shows a contradiction between exploration and exploitation, slow convergence, and weak local refinement. To date, a series of improved ABC algorithms have been proposed to achieve better performance. Inspired by the PSO algorithm, Zhu and Kwong [47] proposed the GABC algorithm, in which the global optimum Gbest is introduced as guidance in the neighborhood search equation to improve the convergence speed. Inspired by the differential evolution algorithm, Gao et al. [15] proposed an improved ABC algorithm with the new search equations "ABC/best/1" and "ABC/rand/1" to improve the global search ability. Gao et al. [18] proposed the MABC algorithm, based on another neighborhood search equation, to search for a new solution in the neighborhood of the optimal solution. Banharnsakun et al. [5] proposed the best-so-far ABC algorithm by improving the search equation of onlooker bees based on the consideration that most feasible solutions should be close to the current optimal value. By selecting two neighbors in an improved neighborhood search equation, Gao et al. put forward the CABC algorithm [19]. Orthogonal experimental design and orthogonal learning were introduced into the ABC algorithm to improve its search capability [4]. Karaboga et al. [25] proposed an improved ABC algorithm, qABC, with an improved neighborhood search equation for onlooker bees, which accelerates the convergence process of ABC. For continuous optimization problems, Kiran et al. [27] proposed the ABCVSS algorithm, which adopts different search strategies to update the solutions and improves the search efficiency. With a multi-information search equation and differential evolution, Gao et al. proposed enhanced ABC algorithms [17, 20]. Kishor et al. [28] proposed the NSABC algorithm based on non-dominated sorting, and Li et al. [29, 30] put forward the ABCM algorithm based on memory and the GRABC algorithm based on gene recombination. Shams et al. [34] proposed the adaptive multi-swarm Multi-pop-ABC algorithm. Shi et al. [41] studied a new chaotic operator and neighbor selection strategy and proposed the NSABC algorithm. Cui et al. [11] introduced the concept of elite individuals into the ABC algorithm and proposed a depth-first search framework and the elite-guided DFSABC_elite algorithm, and Huo et al. [22] proposed an elite-guided multi-objective ABC algorithm. In addition, there are improved ABC algorithms with an improved scout bee [3], adaptive parameters and an adaptive neighborhood [14], information learning [16], and genetic operators [36]. These improvements enhance the optimization search performance of the ABC algorithm. However, optimization algorithms with still higher performance are required in engineering applications.
In this paper, we propose an improved ABC algorithm with better convergence precision and convergence speed. After introducing the concept of the elite group, the bee colony starts to search from the center of the elite group to improve the optimization accuracy. A combined search strategy is used: the stochastic breadth-first search (SBFS) for employed bees and the stochastic depth-first search (SDFS) for onlooker bees. The combined search strategy can not only ensure the diversity of the population but also allow fast convergence. The proposed ABC algorithm can be used in parameter optimization, clustering, control, logistics, and so on.
The rest of the paper is organized as follows. Section 2 mainly introduces the basic ABC algorithm and
some improved algorithms. Section 3 details the ECABC algorithm proposed in this paper. Section 4
provides the tests for suitable parameters of the proposed algorithm. Section 5 provides the comparison of
the experimental data of each algorithm. Finally, conclusions are drawn in Section 6.
2. Basic ABC algorithm and improved ABC algorithms
2.1. Original ABC algorithm
The bee colony in the ABC algorithm consists of three types of bees: employed bees, onlooker bees, and scout bees [23]. Different types of bees are responsible for different tasks. Employed bees are responsible for searching for a better food source in the neighborhood and gathering honey, and then informing onlooker bees of the quality of the food source by the "waggle dance". Onlooker bees select food sources according to this quality information and then search for a better food source with a local search. When a food source is exhausted, the employed bees or onlooker bees at this food source abandon it and then search for a new food source randomly. In the ABC algorithm, the populations of employed bees and onlooker bees are both equal to SN, and each food source represents a solution of the optimization problem. The quality of a food source corresponds to the assessment value of the solution.
The original artificial bee colony algorithm is introduced in five parts: initialization, employed bees, selection probability, onlooker bees, and scout bees.
2.1.1. Initialization
The initial population is generated randomly and consists of SN solutions with D-dimensional variables: X_i = (x_{i1}, x_{i2}, ..., x_{iD}), i = 1, 2, ..., SN. Let UB = (UB_1, UB_2, ..., UB_D) be the upper bound of the solution and LB = (LB_1, LB_2, ..., LB_D) the lower bound. Each initial solution is generated randomly as
x_{id} = LB_d + rand · (UB_d − LB_d),  (1)
where i = 1, 2, ..., SN; d = 1, 2, ..., D; rand is a random real number in the interval [0, 1]; x_{id} is the dth component of the solution X_i.
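The initialization of Eq. (1) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the function name, the use of NumPy, and the seed are assumptions:

```python
import numpy as np

def init_population(sn, d, lb, ub, seed=None):
    """Random initialization per Eq. (1): x_id = LB_d + rand * (UB_d - LB_d)."""
    rng = np.random.default_rng(seed)
    lb = np.asarray(lb, dtype=float)
    ub = np.asarray(ub, dtype=float)
    # rng.random((sn, d)) draws uniform values in [0, 1) for every component
    return lb + rng.random((sn, d)) * (ub - lb)

# Example: SN = 50 solutions of dimension D = 30 in [-100, 100]^D.
pop = init_population(sn=50, d=30, lb=[-100.0] * 30, ub=[100.0] * 30, seed=1)
```

Every row of `pop` is one food source, i.e. one candidate solution within the given bounds.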
2.1.2. Employed bees
Each employed bee chooses a neighbor randomly and then generates a new solution by the neighborhood search
v_{id} = x_{id} + φ_{id}(x_{id} − x_{kd}),  (2)
where i = 1, 2, ..., SN; d = 1, 2, ..., D; v_{id} is the dth dimension of the new solution v_i; φ_{id} is a random real number in the range of [−1, 1]; x_k is a neighbor selected randomly, k ≠ i.
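The single-dimension perturbation of Eq. (2) can be sketched as follows (illustrative Python; function and variable names are assumptions):

```python
import random

def employed_bee_search(x_i, x_k, d):
    """Eq. (2): perturb only the d-th component of x_i relative to neighbor x_k."""
    v = list(x_i)                      # copy the current solution
    phi = random.uniform(-1.0, 1.0)    # phi_id in [-1, 1]
    v[d] = x_i[d] + phi * (x_i[d] - x_k[d])
    return v

# Only dimension d changes; the other components are kept.
v = employed_bee_search([1.0, 2.0, 3.0], [0.0, 0.0, 6.0], d=2)
```

If the resulting v_i is better than x_i, it replaces x_i by greedy selection, as described below.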
2.1.3. Selection probability
Onlooker bees select a food source with a probability proportional to its fitness,
p_i = fit_i / Σ_{j=1}^{SN} fit_j,  (3)
where the fitness is computed from the assessment value as
fit_i = 1/(1 + f_i)  if f_i ≥ 0;  fit_i = 1 + |f_i|  otherwise,  (4)
where f_i is the assessment value of solution x_i and is determined by the optimization problem.
2.1.4. Onlooker bees
Onlooker bees choose a solution according to the probability p_i and then generate a new solution according to Eq. (5). If the new solution v_i is better than x_i, then v_i replaces x_i; otherwise, x_i is kept unchanged and the repeated-failure counter trial(i) is increased by 1.
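The fitness mapping of Eq. (4) and the probability-driven choice of Eq. (3) can be sketched together as a roulette-wheel selection (illustrative Python; names are assumptions):

```python
import random

def fitness(f_i):
    """Eq. (4): 1/(1 + f_i) for f_i >= 0, otherwise 1 + |f_i|."""
    return 1.0 / (1.0 + f_i) if f_i >= 0 else 1.0 + abs(f_i)

def roulette_select(fits):
    """Pick an index with probability p_i = fit_i / sum(fit), Eq. (3)."""
    r = random.random() * sum(fits)
    acc = 0.0
    for i, ft in enumerate(fits):
        acc += ft
        if r < acc:
            return i
    return len(fits) - 1   # guard against floating-point round-off

fits = [fitness(f) for f in [0.0, 1.0, -2.0]]   # -> [1.0, 0.5, 3.0]
choice = roulette_select(fits)
```

A solution with zero fitness is never chosen, which is exactly the behavior the new selection method in Section 3 replaces.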
2.2. Improved ABC algorithms
Some representative improved ABC algorithms, such as GABC, qABC, CABC, MABC, DFSABC_elite, and GRABC, are introduced below. These algorithms improve the performance of ABC in some respects and provide the basis for developing a new improved algorithm.
2.2.1. GABC algorithm
The original ABC algorithm is good at exploration but poor at exploitation. Inspired by the particle swarm optimization algorithm, Zhu and Kwong [47] proposed the global-optimum-guided algorithm GABC. The neighborhood search equation is improved by adding a global-optimum-guided term:
v_{ij} = x_{ij} + φ_{ij}(x_{ij} − x_{kj}) + ψ_{ij}(gbest_j − x_{ij}),  (6)
where φ_{ij} is a uniformly distributed random real number in the range of [−1, 1]; ψ_{ij} is a uniformly distributed random real number in the range of [0, C], with C a nonnegative constant (C = 1.5 in [47]).
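A per-dimension sketch of the Eq. (6) update follows. This is illustrative Python; C = 1.5 follows the value reported for GABC [47], and the other names are assumptions:

```python
import random

def gabc_update(x_i, x_k, gbest, j, c=1.5):
    """Eq. (6): classic ABC perturbation plus a global-optimum-guided term."""
    v = list(x_i)
    phi = random.uniform(-1.0, 1.0)  # phi_ij in [-1, 1]
    psi = random.uniform(0.0, c)     # psi_ij in [0, C], C = 1.5 assumed from [47]
    v[j] = x_i[j] + phi * (x_i[j] - x_k[j]) + psi * (gbest[j] - x_i[j])
    return v

v = gabc_update([1.0, 1.0], [1.0, 3.0], [1.0, 0.0], j=1)
```

The psi term pulls the candidate towards the global optimum, which is what accelerates convergence relative to plain ABC.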
2.2.2. qABC algorithm
In the qABC algorithm [25], the neighborhood search equation for onlooker bees is improved as
v^{best}_{Nm,i} = x^{best}_{Nm,i} + φ_{m,i}(x^{best}_{Nm,i} − x_{k,i}),  (7)
where x^{best}_{Nm} is the optimal individual among x_m (m = 1, 2, ..., SN) and its neighbors; φ_{m,i} is a random real number in the range of [−1, 1]; x_k is a neighbor selected randomly, k = 1, 2, ..., N.
To determine the neighbors of x_m (m = 1, 2, ..., SN), the mean distance md_m between x_m and the other solutions is first calculated as
md_m = Σ_{j=1}^{SN} d(m, j) / (SN − 1),  (8)
where d(m, j) represents the Euclidean distance between x_m and x_j.
Then, the neighbors of x_m are selected: if Eq. (9) is satisfied, x_j is a neighbor of x_m,
d(m, j) ≤ r · md_m,  (9)
where r is the neighborhood radius.
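Eqs. (8) and (9) amount to a mean-distance threshold, which can be sketched as follows (illustrative Python; names are assumptions):

```python
import math

def mean_distance(pop, m):
    """Eq. (8): average Euclidean distance from x_m to the other SN - 1 solutions."""
    others = [math.dist(pop[m], pop[j]) for j in range(len(pop)) if j != m]
    return sum(others) / (len(pop) - 1)

def neighbors(pop, m, r):
    """Eq. (9): indices j with d(m, j) <= r * md_m."""
    md = mean_distance(pop, m)
    return [j for j in range(len(pop))
            if j != m and math.dist(pop[m], pop[j]) <= r * md]

pop = [[0.0, 0.0], [1.0, 0.0], [10.0, 0.0]]
# md_0 = (1 + 10) / 2 = 5.5, so with r = 1 only x_1 is a neighbor of x_0.
```

Larger r values enlarge the neighborhood, making qABC behave more like the original ABC.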
2.2.3. CABC algorithm
In the CABC algorithm [19], two neighbors are selected randomly in the improved neighborhood search equation
v_{i,j} = x_{r1,j} + φ_{i,j}(x_{r1,j} − x_{r2,j}),  (10)
where x_{r1} and x_{r2} are two neighbors selected randomly, r1, r2 = 1, 2, ..., SN; φ_{i,j} is a uniformly distributed random real number in the range of [−1, 1].
2.2.4. MABC algorithm
In the MABC algorithm [18], the neighborhood search equation is guided by the global optimum:
v_{i,j} = x_{best,j} + φ_{i,j}(x_{r1,j} − x_{r2,j}),  (11)
where x_{best} is the global optimum; φ_{i,j} is a uniformly distributed random real number in the range of [−1, 1]; x_{r1} and x_{r2} are two neighbors selected randomly, r1, r2 = 1, 2, ..., SN.
2.2.5. DFSABC_elite algorithm
Aiming at the slow search speed towards the optimal value in the neighborhood search process of employed bees, the DFSABC_elite algorithm [11] adopts a strategy to accelerate the convergence by improving the neighborhood search formula with elite individual guidance:
v_{e,j} = (1/2)(x_{e,j} + x_{best,j}) + φ_{e,j}(x_{best,j} − x_{k,j}),  (13)
where x_{best} is the global optimum; x_e is an elite individual; φ_{e,j} is a random real number in the range of [−1, 1].
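The Eq. (13) update can be sketched as follows (illustrative Python; names are assumptions):

```python
def elite_guided_update(x_e, x_best, x_k, j, phi):
    """Eq. (13): start from the midpoint of the elite and the best solution,
    then perturb along the direction from a random neighbor to the best solution."""
    v = list(x_e)
    v[j] = 0.5 * (x_e[j] + x_best[j]) + phi * (x_best[j] - x_k[j])
    return v

# With phi = 0 the new component is exactly the elite/best midpoint.
v = elite_guided_update([2.0, 2.0], [4.0, 4.0], [1.0, 1.0], j=0, phi=0.0)
```

Anchoring the search at the elite/best midpoint is what distinguishes Eq. (13) from the purely random base point of Eq. (2).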
The DFSABC_elite algorithm also proposes a depth-first search (DFS) framework. The key feature of the DFS framework is to allocate more computing resources to the food sources with better quality.
2.2.6. GRABC algorithm
In the GRABC algorithm [29], a gene recombination operator is proposed to enhance the exploitation ability of ABC and accelerate the convergence process without decreasing the diversity. To apply the gene recombination operator, the X_m farthest from X_i is first obtained with Eqs. (14) and (15), and X_m is taken as the neighbor of X_i:
d(X_i, X_k) = min_{1 ≤ j ≤ D} |X_{i,j} − X_{k,j}|,  (14)
XR_{i,j} = X_{m,j}  if rand < P_j;  XR_{i,j} = X_{i,j}  otherwise.  (17)
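Eq. (17) is a per-gene crossover mask. A minimal sketch follows (illustrative Python; `p` holds the per-gene probabilities P_j, whose definition comes from the intervening equations not reproduced here):

```python
import random

def gene_recombine(x_i, x_m, p):
    """Eq. (17): take gene j from X_m with probability P_j, else keep X_i's gene."""
    return [x_m[j] if random.random() < p[j] else x_i[j]
            for j in range(len(x_i))]

# p = 1 forces the gene from x_m; p = 0 keeps the gene from x_i.
child = gene_recombine([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], p=[1.0, 0.0, 1.0])
```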
3. Improved algorithm
In this section, the novel ABC algorithm named ECABC is illustrated in detail. The elite group guidance strategy is introduced to improve the neighborhood search equation, and the combined breadth-depth search strategy is introduced to accelerate the convergence without decreasing the diversity. A new food source selection method for onlooker bees is introduced to simplify the selection and enhance the search ability.
3.1. A novel neighborhood search equation based on elite group guidance
The ideal solution is more likely to be distributed around the elite group. The foraging area where the elite group gathers generally has good food sources, so it is one of the best choices for honey bees. Although guidance by the global optimum alone can produce a better solution quickly, it is prone to falling into a local extremum. Therefore, by following the behavior of the elite group, honey bees can improve the solution quality quickly and avoid falling into a local extremum prematurely.
There are obvious differences between the traditional neighborhood search and elite group guidance. As illustrated in Fig. 1, the circles represent the honey bees and the pentagram represents the ideal solution. The bees obtain a better solution when they move closer to the pentagram. The traditional neighborhood search method is illustrated in Fig. 1(a). Bees ①, ②, and ③ randomly select neighbors Rand_k1, Rand_k2, and Rand_k3, respectively, and generate the new solutions ①, ②, and ③ indicated by dotted circles, which are better than the previous solutions. With the same method, the global optimum Gbest generates a new solution NewGbest, but the solution quality is not improved. The neighborhood search method based on elite group guidance is illustrated in Fig. 1(b). Bees ①, ②, and ③ choose the center of the elite group, indicated by the diamond, as the starting point for the search. The bees randomly select neighbors Rand_k1, Rand_k2, and Rand_k3, respectively, and then generate new solutions by searching towards both Gbest and the randomly selected neighbors, thus obtaining a better solution quickly while keeping the population diversity. The new solutions ①, ②, and ③ indicated by dotted circles in Fig. 1(b) are closer to the ideal solution than those in Fig. 1(a). Gbest also moves closer to the ideal solution. Therefore, the neighborhood search method based on elite group guidance can improve the search efficiency, ensure the diversity of the population, and avoid falling into a local extremum.
CE
Fig. 1. Search method comparison between traditional neighborhood search and the elite group guidance
We introduce a new neighborhood search equation based on elite group guidance. Firstly, the fitness of each X_i (i = 1, 2, ..., SN) is calculated and the X_i are sorted according to fitness. Then the top T = ceil(p·SN) honey bees are selected to construct the elite group XE_i (i = 1, 2, ..., T), where p is the proportion of the elite group in the population. XEC, the center of the elite group, is calculated as
XEC_d = (1/T) Σ_{i=1}^{T} XE_{i,d},  d = 1, 2, ..., D.  (18)
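Selecting the elite group and computing its center per Eq. (18) can be sketched as follows (illustrative Python; minimization is assumed, so lower objective values rank higher):

```python
import math

def elite_center(pop, values, p):
    """Pick the top T = ceil(p * SN) solutions by objective value (minimization)
    and return their component-wise mean, Eq. (18)."""
    t = math.ceil(p * len(pop))
    elite = sorted(range(len(pop)), key=lambda i: values[i])[:t]
    d = len(pop[0])
    return [sum(pop[i][j] for i in elite) / t for j in range(d)]

pop = [[0.0, 0.0], [2.0, 2.0], [10.0, 10.0]]
center = elite_center(pop, values=[0.0, 1.0, 5.0], p=0.5)  # T = 2, mean of first two
```

The center is recomputed whenever the population changes, which matches lines 03 and 15 of Algorithm 1 below.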
To accelerate the convergence and improve the search efficiency, a novel neighborhood search equation based on elite group guidance is proposed as Eq. (19), where XEC is the center of the elite group; φ_{id} is a uniformly distributed random real number in the range of [−1, 1]; and Gbest is the global optimum.
3.2. Combined breadth-depth search strategy
In this subsection, the combined breadth-depth search strategy is introduced to accelerate the convergence without decreasing the diversity. The breadth-first search strategy is adopted by employed bees and the depth-first search strategy by onlooker bees.
In the foraging process of honey bees, some bees may forage at one nectar source for a long time, whereas other bees may search for new nectar sources frequently. As shown in Fig. 2, the circles represent the honey bees and the pentagram represents the ideal solution. A red arrow represents a valid search, in which the generated new solution is better than the original one, whereas a deep blue arrow represents an invalid search, in which the generated new solution is worse than the original one. In the stochastic breadth-first search, the bees are selected randomly with the same probability to search the neighborhood for a better solution. The search is performed in a random and non-directional way, so the population diversity is guaranteed. For example, in one iteration loop, bee ① is selected randomly twice and its first search is invalid. Bee ② improves its solution through the neighborhood search, although it is selected only once. Bee ③ is not selected and its solution is unchanged.
Fig. 2. Stochastic breadth-first search
In the stochastic depth-first search, a honey bee continues searching the neighborhood as long as each new solution is better than the previous one. On the contrary, if the new solution is worse than the previous one, a new honey bee is selected randomly for the next neighborhood search. As illustrated in Fig. 3, for example, bee ① generates a new solution for the first time and the search continues because the new solution is better than the original one. However, a worse solution is obtained at the third attempt; this search then stops and a new search starts. Bee ② is selected and its second neighborhood search is invalid, so it stops after the second search. Bee ⑤ is not selected and its solution is unchanged. Therefore, with the stochastic depth-first search, the algorithm can achieve fast convergence.
Fig. 3. Stochastic depth-first search
To choose a solution for onlooker bees in the original ABC, the fitness value is calculated first and then the selection probability is calculated. Onlooker bees select a solution supplied by the employed bees based on this selection probability. However, in the later stage of the algorithm, the differences in fitness values become smaller and the selection probabilities of onlooker bees become almost the same. The nearly uniform selection probability loses its original role and leads to slow convergence of the algorithm. To overcome this shortcoming, a new selection method for onlooker bees is proposed. The features of the new selection method are as follows:
ii. Onlooker bees select a solution only from the elite group, randomly.
iii. After a solution is selected, onlooker bees generate a new solution with the neighborhood search equation Eq. (19).
The detailed introduction above of the elite group guidance, the combined breadth-depth search strategy, and the new selection method of onlooker bees gives the main features of the improved ABC algorithm. To illustrate the steps of ECABC, the pseudo-code is provided in Fig. 4.
Algorithm 1: ECABC
01: Generate SN solutions with D-dimensional variables according to Eq. (1);
02: While FES < MaxFES
03:   Calculate the center of the elite group using Eq. (18);
04:   for i=1:SN //employed bees phase
05:     Select a solution xt from the population randomly; //stochastic breadth-first search strategy
06:     Generate a new solution vt with Eq. (19); //the novel neighborhood search equation
07:     Evaluate the new solution f(vt);
08:     if f(vt) < f(xt)
09:       Set xt = vt, f(xt) = f(vt), trial(t) = 0;
10:     else
11:       trial(t) = trial(t) + 1;
12:     end if
13:   end for //end employed bees phase
14:   FES = FES + SN;
15:   Update the center of the elite group using Eq. (18);
16:   flag = 1; //stochastic depth-first search strategy
17:   for i=1:SN //onlooker bees phase
18:     if flag == 1
19:       Select a solution xt from the elite group randomly; //the random selection of elite
20:     end if
21:     Generate a new solution vt with Eq. (19); //the novel neighborhood search equation
22:     Evaluate the new solution f(vt);
23:     if f(vt) < f(xt)
24:       Set xt = vt, f(xt) = f(vt), trial(t) = 0, flag = 0;
25:     else
26:       trial(t) = trial(t) + 1, flag = 1;
27:     end if
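The stochastic depth-first onlooker phase (lines 16-27 of Algorithm 1) can be mirrored as follows. This is an illustrative Python sketch in which `search` stands in for the Eq. (19) update and `f` for the objective; all names are assumptions:

```python
import random

def onlooker_phase(pop, values, elite_idx, trial, search, f):
    """While a search improves a solution, keep deepening on it (flag False);
    after a failed search, pick a new elite solution at random (flag True)."""
    flag = True
    t = random.choice(elite_idx)
    for _ in range(len(pop)):            # SN onlooker searches per iteration
        if flag:
            t = random.choice(elite_idx)  # random selection from the elite group
        v = search(pop[t])                # stand-in for Eq. (19)
        fv = f(v)
        if fv < values[t]:                # greedy replacement, keep deepening
            pop[t], values[t], trial[t], flag = v, fv, 0, False
        else:                             # failure: restart from a new elite bee
            trial[t] += 1
            flag = True
    return pop, values, trial

# Toy run: a halving "search" always improves the sphere objective,
# so the phase keeps deepening on one elite solution.
f = lambda x: sum(xi * xi for xi in x)
pop = [[4.0], [2.0], [8.0]]
values = [f(x) for x in pop]
trial = [0, 0, 0]
pop, values, trial = onlooker_phase(pop, values, elite_idx=[0, 1], trial=trial,
                                    search=lambda x: [0.5 * xi for xi in x], f=f)
```

Because every toy search succeeds, `flag` stays False after the first improvement and the counter `trial` is never incremented; non-elite solutions are untouched.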
Compared with the original ABC, ECABC introduces additional computation for selecting the elite group and calculating the elite group center. The computational complexity of selecting the elite group is O(SN·log(SN)) and the computational complexity of calculating the elite group center is O(SN). The computational complexity of the other parts of ECABC is O(SN·D). Since O(SN) is much smaller than the complexity of the other two parts, the complexity of ECABC can be expressed as O(SN·D + SN·log(SN)). The computational complexity of the original ABC mainly involves two aspects, population update and fitness calculation, and is denoted as O(SN·D) [16]. Since log(SN) is generally much smaller than D, the computational complexity of the ECABC algorithm can be expressed as O(SN·D). Compared with the computational complexity of the original ABC, ILABC, or DFSABC_elite [11, 16], the computational complexity of ECABC does not increase significantly.
The population diversity is an important factor in evaluating the optimization performance of the algorithm and indicates its convergence efficiency and its ability to jump out of local extrema. The population diversity is defined as
Diversity = (1/SN) Σ_{i=1}^{SN} (1/D) Σ_{j=1}^{D} |x_{i,j} − x̄_j|,  (20)
where x̄_j is the jth component of the average of all individuals, x̄_j = (1/SN) Σ_{i=1}^{SN} x_{i,j}.
The population diversity should decrease quickly if there is only one extremum, as for continuous unimodal functions, so that the concentrated population allows the algorithm to provide the optimal solution quickly within limited iterations. However, if there are multiple local minima, as for multimodal functions, the population should maintain a certain diversity during the iterations to prevent the algorithm from falling into a local extremum prematurely.
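Eq. (20) transcribes directly into Python (an illustrative sketch; names are assumptions):

```python
def diversity(pop):
    """Eq. (20): mean over individuals of the mean absolute deviation
    of each component from the population average."""
    sn, d = len(pop), len(pop[0])
    mean = [sum(pop[i][j] for i in range(sn)) / sn for j in range(d)]
    return sum(
        sum(abs(pop[i][j] - mean[j]) for j in range(d)) / d
        for i in range(sn)
    ) / sn

# A fully converged population has zero diversity.
```

Tracking this quantity over iterations is how the unimodal/multimodal behavior described above is observed in practice.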
4. Parameter tests
The 22 benchmark functions [16] in Table 1 are selected to test the proposed ECABC algorithm. The variable dimensions are set as D = 30, 50, and 100. f1–f6 and f8 are continuous unimodal functions; f7 is a discontinuous step function; f11–f22 are continuous multimodal functions; f10 is a unimodal function when D ≤ 3 and a multimodal function when D > 3. The optimal and acceptable values are given in Table 1, where the acceptable values represent a satisfactory solution of the function.
Table 1 Benchmark functions.
Sphere: f1(x) = Σ_{i=1}^{D} x_i^2; range [-100, 100]^D; optimum 0; accept 1×10^-8
Elliptic: f2(x) = Σ_{i=1}^{D} (10^6)^{(i-1)/(D-1)} x_i^2; range [-100, 100]^D; optimum 0; accept 1×10^-8
SumSquare: f3(x) = Σ_{i=1}^{D} i·x_i^2; range [-10, 10]^D; optimum 0; accept 1×10^-8
SumPower: f4(x) = Σ_{i=1}^{D} |x_i|^{i+1}; range [-1, 1]^D; optimum 0; accept 1×10^-8
Schwefel 2.22: f5(x) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|; range [-10, 10]^D; optimum 0; accept 1×10^-8
Exponential: f8(x) = exp(0.5·Σ_{i=1}^{D} x_i); range [-10, 10]^D; optimum 0; accept 1×10^-8
Quartic: f9(x) = Σ_{i=1}^{D} i·x_i^4 + random[0, 1); range [-1.28, 1.28]^D; optimum 0; accept 1×10^-1
RosenBrock: f10(x) = Σ_{i=1}^{D-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2]; range [-5, 10]^D; optimum 0; accept 1×10^-1
Rastrigin: f11(x) = Σ_{i=1}^{D} [x_i^2 - 10cos(2πx_i) + 10]; range [-5.12, 5.12]^D; optimum 0; accept 1×10^-8
NcRastrigin: f12(x) = Σ_{i=1}^{D} [y_i^2 - 10cos(2πy_i) + 10], y_i = x_i if |x_i| < 1/2, y_i = round(2x_i)/2 otherwise; range [-5.12, 5.12]^D; optimum 0; accept 1×10^-8
Griewank: f13(x) = (1/4000)·Σ_{i=1}^{D} x_i^2 - Π_{i=1}^{D} cos(x_i/√i) + 1; range [-600, 600]^D; optimum 0; accept 1×10^-8
Schwefel 2.26: f14(x) = 418.98288727243380·D - Σ_{i=1}^{D} x_i·sin(√|x_i|); range [-500, 500]^D; optimum 0; accept 1×10^-8
Ackley: f15(x) = 20 + e - 20·exp(-0.2·√((1/D)·Σ_{i=1}^{D} x_i^2)) - exp((1/D)·Σ_{i=1}^{D} cos(2πx_i)); range [-50, 50]^D; optimum 0; accept 1×10^-8
Penalized 1: f16(x) = (π/D)·{10sin^2(πy_1) + Σ_{i=1}^{D-1} (y_i - 1)^2·[1 + 10sin^2(πy_{i+1})] + (y_D - 1)^2} + Σ_{i=1}^{D} u(x_i, 10, 100, 4), y_i = 1 + (x_i + 1)/4; range [-100, 100]^D; optimum 0; accept 1×10^-8
Penalized 2: f17(x) = 0.1·{sin^2(3πx_1) + Σ_{i=1}^{D-1} (x_i - 1)^2·[1 + sin^2(3πx_{i+1})] + (x_D - 1)^2·[1 + sin^2(2πx_D)]} + Σ_{i=1}^{D} u(x_i, 5, 100, 4); range [-100, 100]^D; optimum 0; accept 1×10^-8
Alpine: f18(x) = Σ_{i=1}^{D} |x_i·sin(x_i) + 0.1·x_i|; range [-10, 10]^D; optimum 0; accept 1×10^-8
Weierstrass: f20(x) = Σ_{i=1}^{D} Σ_{k=0}^{kmax} a^k·cos(2π·b^k·(x_i + 0.5)) - D·Σ_{k=0}^{kmax} a^k·cos(2π·b^k·0.5), a = 0.5, b = 3, kmax = 20
Himmelblau: f21(x) = (1/D)·Σ_{i=1}^{D} (x_i^4 - 16x_i^2 + 5x_i); range [-5, 5]^D; optimum -78.33236; accept -78
Michalewicz: f22(x) = -Σ_{i=1}^{D} sin(x_i)·sin^20(i·x_i^2/π); range [0, π]^D; optimum -29.96; accept -29, -48, -95
In f16 and f17, u(x_i, a, k, m) = k·(x_i - a)^m if x_i > a; 0 if -a ≤ x_i ≤ a; k·(-x_i - a)^m if x_i < -a.
To ensure a reasonable algorithm comparison, according to the commonly used parameters of the ABC algorithm, the population size is set as SN = 50; the termination condition is FES = 5000·D, where FES denotes the number of function evaluations of f(x); and the maximum repeated-failure value is limit = D·SN.
To obtain better performance, we select five functions, including typical unimodal and multimodal functions (f1, f4, f10, f15, and f18), to test the influences of the proportion of the elite group p and the number of mutated dimensions Dim on the calculation results. The convergence properties on the selected functions are verified with p = 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4 and Dim = 1, 2, 3, 4, 5, 6, 8, 10, giving a total of 64 combinations of p and Dim. Variable dimensions D = 30, 50, and 100 are used to test the benchmark functions.
The test results for parameters p and Dim are presented as 3D graphs in Fig. 5. The X coordinate is the parameter Dim; the Y coordinate is the parameter p; the Z coordinate is the optimum value of the test function. To ensure the stability of the results, the test function under each condition is executed 25 times independently and the mean optimum value is used as the result of the algorithm. To make the results more intuitive, the best values are expressed in logarithmic form.
As shown in Fig. 5, the parameters p and Dim have a great influence on the optimization results of the algorithm, and different combinations of the parameters give different results. For the unimodal function f1, when the p and Dim values are larger, the optimization ability is greatly reduced. For the unimodal function f4, p and Dim can be selected within only a small range for a better result. For the multimodal functions f10 and f15, the parameter Dim largely determines the optimization result: the larger the Dim value, the worse the optimization result. However, for the multimodal function f18, the parameter p largely determines the optimization result: the larger the p value, the worse the optimization result. Therefore, in order to take into account the optimization ability on each test function and reflect the comprehensive performance of the proposed algorithm, the parameters are set as p = 0.1 and Dim = 2 in this paper.
Fig. 5. Test results of parameters p and Dim (panels shown: f1 with D = 30, 50, and 100)
5. Experimental results and analysis
In this section, we first test the effectiveness of each improvement component and then compare ECABC with other algorithms in a comprehensive test on the 22 benchmark functions.
5.1. Effectiveness of improvement components of ECABC
This paper presents three improvement components of the artificial bee colony algorithm: a novel neighborhood search equation based on elite group guidance, a combined search strategy, and a random selection method of elite solutions for onlooker bees. This subsection tests the effectiveness of each improvement component on the optimization results.
i. ABC combined only with the novel neighborhood search equation is denoted by ABC-e.
ii. ABC combined only with the combined search strategy is denoted by ABC-c.
iii. ABC combined only with the random selection method is denoted by ABC-r.
iv. ABC combined with the combined search strategy and the elite random selection method is denoted by ABC-cr.
v. ABC combined with the novel neighborhood search equation and the elite random selection method.
The benchmark functions f1, f7, and f10 are selected to test the effectiveness of the improvement components. The test results are shown in Fig. 6.
Fig. 6. Convergence performance comparison between ABC and each improvement component
As shown in Fig. 6, the new neighborhood search equation can effectively improve the convergence speed, but in the later period it falls into local extrema more easily, as for test function f1. The combined search strategy can ensure the diversity of the swarm, but the algorithm converges slowly. The elite random selection strategy of onlooker bees can improve the convergence speed in the later search stage of the algorithm. The combination of the combined search strategy and the elite random selection strategy can ensure the diversity of the algorithm, but it also decreases the convergence speed. The combination of the elite-group-guidance-based search equation and the elite random selection strategy can effectively improve the comprehensive performance of the algorithm, but the algorithm easily falls into local extrema. The combination of the three improved methods can effectively improve the convergence speed of the algorithm, maintain the diversity of the population, avoid falling into local minima, and provide the best results.
5.2. Convergence performance comparison
In this experiment, in order to validate the efficiency of ECABC, the algorithms ABC [23], GABC [47], CABC [19], MABC [18], qABC [25], GRABC [29], and DFSABC_elite [11] are compared on the 22 benchmark functions with variable dimension D = 30. The parameters of these algorithms come from the corresponding reports. To illustrate the effectiveness of the proposed algorithm, the parameters with the optimal comprehensive performance are selected for the comparison: p = 0.1, Dim = 2. The parameters of these algorithms are provided in Table 2.
Fig. 7. Convergence curves of the compared algorithms on the benchmark functions (panels: f1–f9, f19–f22)
As shown in Fig. 7, ECABC performs better than the other algorithms. For the unimodal functions f1–f6, all the algorithms can obtain a satisfactory solution, but ECABC is obviously faster. For the multimodal functions f10 and f14, the performance of ECABC is not as excellent as expected, but it can still obtain satisfactory solutions. For the other benchmark functions, ECABC performs better than the other algorithms in terms of both convergence speed and convergence precision. Overall, the comparison of convergence curves shows that the improved ABC algorithm ECABC can effectively improve the performance.
5.3. Comparisons of ECABC results and the results of other ABC algorithms
In order to compare ECABC with ABC, GABC, CABC, MABC, qABC, GRABC, and DFSABC_elite on the 22 benchmark functions, the convergence precision of ECABC is tested with D = 30, 50, and 100. Each algorithm runs 25 times independently and the optimal values are recorded. The results are shown in Table 3, where +/−/= indicate that the performance of the corresponding algorithm is better than, worse than, or equal to that of ECABC according to the Wilcoxon rank test at the 0.05 level [12].
As shown in Table 3, ECABC is significantly better than the other algorithms in most cases. For the functions with D = 30, ECABC performs significantly better than ABC, GABC, CABC, MABC, qABC, GRABC, and DFSABC_elite on 18, 17, 10, 10, 18, 18, and 9 functions, respectively. In contrast, ECABC is beaten by ABC, GABC, CABC, MABC, qABC, GRABC, and DFSABC_elite on only 1, 1, 3, 2, 1, 1, and 3 functions, respectively. ECABC fails to obtain the best results for only five functions: f6, f10, f15, f18, and f22. For the remaining cases with D = 30, all the algorithms perform equally. For D = 50, it is clear that ECABC is significantly better than the other competitors in the majority of cases: ECABC performs significantly better than ABC, GABC, CABC, MABC, qABC, GRABC, and DFSABC_elite on 18, 15, 7, 9, 18, 17, and 6 functions, and is worse on only 1, 1, 4, 3, 1, 2, and 5 functions, respectively. Again, ECABC fails to obtain the best results for only five functions: f6, f10, f15, f18, and f22. For D = 100, the results are similar to those for D = 30 and D = 50, and ECABC is also superior to the other algorithms: ECABC performs significantly better than ABC, GABC, CABC, MABC, qABC, GRABC, and DFSABC_elite on 17, 16, 8, 8, 18, 18, and 7 functions, and is worse on 2, 2, 5, 4, 1, 1, and 5 functions, respectively. ECABC is worse than some competitors on f6, f10, f14, f15, f18, and f22.
Table 3 Results of Wilcoxon's test comparing ECABC with the other ABC methods on 22 benchmark functions

ECABC vs        D=30          D=50          D=100
                +   -   =     +   -   =     +   -   =
ABC             1   18  3     1   18  3     2   17  3
GABC            1   17  4     1   15  6     2   16  4
CABC            3   10  9     4   7   11    5   8   9
MABC            2   10  10    3   9   10    4   8   10
qABC            1   18  3     1   18  3     1   18  3
GRABC           1   18  3     2   17  3     1   18  3
DFSABC_elite    3   9   10    5   6   11    5   7   10
Moreover, Friedman's test is used to evaluate the performance of all the ABC variants. The final mean ranks of all ABC variants for D = 30, 50, and 100 are shown in Table 4, with the best results highlighted in bold. Obviously, ECABC obtains the best ranking on the majority of the test functions, and the mean rank of ECABC over all the functions is better than that of the other ABC variants.
Table 4 Ranking of all ABCs based on the Friedman test for 22 functions.
Mean ranks ABC GABC CABC MABC qABC GRABC DFSABC_elite ECABC
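The Friedman ranking above can be reproduced as follows; a minimal sketch with hypothetical error data (three algorithms, five functions), assuming SciPy is available:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Hypothetical mean errors: rows = benchmark functions, columns = algorithms.
errors = np.array([
    [1e-3, 1e-5, 1e-8],   # f1
    [2e-2, 3e-4, 1e-6],   # f2
    [5e-1, 2e-2, 4e-3],   # f3
    [1e-4, 5e-6, 2e-9],   # f4
    [3e-2, 1e-3, 6e-5],   # f5
])

# Friedman's test over the three algorithms (one column per algorithm).
stat, p_value = friedmanchisquare(*errors.T)

# Mean rank per algorithm (rank 1 = best on each function); as in Table 4,
# the best-performing algorithm has the lowest mean rank.
mean_ranks = rankdata(errors, axis=1).mean(axis=0)
print(mean_ranks, p_value)
```

A low mean rank combined with a p-value below 0.05 indicates that the ranking differences among the algorithms are statistically meaningful.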
To test the population diversity, the benchmark functions f1, f4, f6, f10, f12, f16, f18, f20, and f22 are selected with dimension D=30. Population diversity reflects the robustness and the global convergence rate of the algorithms. The comparison curves of population diversity are shown in Fig. 8.
Fig. 8. Comparison curves of population diversity (panels for f1, f4, f6, f10, f12, f16, f18, f20, and f22).
As shown in Fig. 8, the population diversity of the algorithms decreases with the number of iterations, but there are also slight variations in some iterations due to the scout bees in the population. A scout bee generates a new solution randomly when an employed bee or onlooker bee repeatedly searches without improving its solution; the scout bees thus improve the ability to jump out of local extrema. For the unimodal functions f1, f4, and f6, the population diversity of all the algorithms converges rapidly and ECABC is the fastest; the population moves quickly toward the optimal value, as shown in Fig. 7. These results indicate that ECABC has excellent global optimization and rapid convergence ability on unimodal functions. For the multimodal function f10, the population diversity of ECABC decreases rapidly, resulting in a poor global search ability, which is one of the reasons for its unsatisfactory results on this function compared with the other ABC algorithms. For the multimodal test functions f12, f16, f18, and f22, the population diversity of ECABC decreases as the number of iterations increases, but there are also some fluctuations, indicating that ECABC retains a good global search ability due to the role of the scout bees. In particular, for the benchmark function f20, the population diversity of some algorithms increases with the number of iterations. This increase may be interpreted as follows: the test function has many extreme points, and different parts of the population move toward different extreme points as the number of iterations increases. By following the center of the elite group, the ECABC algorithm avoids this problem and its convergence speed is enhanced.
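Population diversity, as plotted above, is commonly measured as the mean distance of the individuals from the population centroid. A minimal sketch of one such measure (the paper's exact definition may differ):

```python
import numpy as np

def population_diversity(population):
    """Mean Euclidean distance of the individuals from the population
    centroid: one common diversity measure (assumed here, not
    necessarily the paper's exact definition)."""
    centroid = population.mean(axis=0)
    return np.mean(np.linalg.norm(population - centroid, axis=1))

# Example: a random population of SN=50 solutions in D=30 dimensions.
rng = np.random.default_rng(0)
pop = rng.uniform(-100, 100, size=(50, 30))
print(population_diversity(pop))  # large for a scattered population

# A converged population clustered around one point has low diversity.
converged = np.full((50, 30), 1.0) + rng.normal(0, 1e-6, size=(50, 30))
print(population_diversity(converged))
```

Tracking this quantity per iteration produces curves like those in Fig. 8: rapid decay signals fast convergence, while sustained or rising diversity signals continued exploration.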
The comparison of time complexity is performed on the 22 test functions. The time complexity of the proposed algorithm was theoretically analyzed in Subsection 3.5; further testing is required to determine the actual running time of the algorithm. D is set to 30 and the other parameters are the same as those in Table 2. The computer configuration is: Intel i7-7700HQ processor, Windows 10 Home Basic OS, 8 GB memory, and MATLAB R2016a. The average running times are provided in Table 5.
Table 5. Comparison of the running time of the various ABC algorithms

Algorithms    ABC     GABC    CABC    MABC    qABC    GRABC   DFSABC_elite  ECABC
Ave-time (s)  1.7627  1.7669  1.7799  1.9962  2.8011  1.9569  1.6467        2.0954
From the time data in Table 5, the method proposed in this paper requires slightly more computation time than ABC, but the difference from the other algorithms is not significant, showing that the time complexity is acceptable.
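The measurement procedure can be mirrored in code; a minimal sketch of the averaging step, with a hypothetical `dummy_run` standing in for one optimizer run on a benchmark function:

```python
import time

def average_runtime(algorithm, runs=25):
    """Average wall-clock time of one run of `algorithm` over
    independent repetitions, mirroring the procedure described above."""
    start = time.perf_counter()
    for _ in range(runs):
        algorithm()
    return (time.perf_counter() - start) / runs

# Hypothetical stand-in for a single optimizer run on a benchmark function.
def dummy_run():
    return sum(i * i for i in range(10_000))

print(f"{average_runtime(dummy_run, runs=5):.6f} s per run")
```

Using a monotonic high-resolution clock such as `perf_counter` (rather than wall-clock date functions) avoids measurement drift over the 25 independent runs.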
6. Conclusions
In this study, inspired by elite group guidance and the combined search strategy, we propose an improved ABC algorithm, ECABC. The performance of ABC is improved in the following aspects. Firstly, a new neighborhood search equation is proposed according to the elite group guidance strategy. Secondly, the combined breadth-depth search strategy is put forward: employed bees search their neighborhoods with the stochastic breadth-first search strategy, and onlooker bees adopt the stochastic depth-first search strategy. Thirdly, the probability selection method of the onlooker bees is replaced by the elite individual random selection method. In addition, the elite group proportion p and the dimension mutation Dim are tested to obtain suitable parameter values. The experimental results on 22 benchmark functions show that ECABC effectively improves the performance of ABC in terms of solution quality and convergence speed.
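For context, the canonical ABC neighborhood search that these strategies build on (Karaboga's operator v_ij = x_ij + φ_ij(x_ij − x_kj)) can be sketched as follows, together with an elite-guided variant in the style of DFSABC_elite; the function name, the `elite` parameter, and the sampling details are illustrative, not the paper's exact ECABC equation:

```python
import numpy as np

rng = np.random.default_rng(1)

def neighborhood_search(x_i, x_k, elite=None):
    """Canonical ABC move v_ij = x_ij + phi_ij * (x_ij - x_kj) on one
    randomly chosen dimension j.  Passing an elite solution as `elite`
    gives an elite-guided variant v_ij = x_ej + phi * (x_ej - x_kj).
    This is a generic sketch, not the paper's exact ECABC equation."""
    base = x_i if elite is None else elite
    j = rng.integers(len(x_i))       # random dimension to perturb
    phi = rng.uniform(-1.0, 1.0)     # scaling factor in [-1, 1]
    v = x_i.copy()
    v[j] = base[j] + phi * (base[j] - x_k[j])
    return v

# One candidate move: x_i is the current solution, x_k a random neighbor.
x_i = rng.uniform(-5.0, 5.0, 30)
x_k = rng.uniform(-5.0, 5.0, 30)
v = neighborhood_search(x_i, x_k)
print(int((v != x_i).sum()))  # at most one coordinate changes
```

In standard ABC the candidate v replaces x_i only if it improves the objective value (greedy selection); elite-guided variants bias the move toward high-quality solutions, which is the spirit of the elite group guidance strategy.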
Acknowledgments
The author is very grateful to the Editor-in-Chief and the anonymous referees for their constructive comments.
References
[1] F.S. Abu-Mouti, M.E. El-Hawary, Optimal Distributed Generation Allocation and Sizing in Distribution
Systems via Artificial Bee Colony Algorithm, IEEE Transactions on Power Delivery, 26 (2011) 2090-2101.
[2] B. Akay, D. Karaboga, A modified Artificial Bee Colony algorithm for real-parameter optimization,
Information Sciences, 192 (2012) 120-142.
[3] S. Anuar, A. Selamat, R. Sallehuddin, A modified scout bee for artificial bee colony algorithm and its
performance on optimization problems, Journal of King Saud University - Computer and Information
Sciences, 28 (2016) 395-406.
[4] W. Bai, I. Eke, K.Y. Lee, An improved artificial bee colony optimization algorithm based on orthogonal
learning for optimal power flow problem, Control Engineering Practice, 61 (2017) 163-172.
[5] A. Banharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in Artificial Bee Colony
algorithm, Applied Soft Computing, 11 (2011) 2888-2901.
[6] F. Caraffini, F. Neri, G. Iacca, A. Mol, Parallel memetic structures, Information Sciences, 227 (2013)
60-82.
[8] J. Cheng, G. Zhang, F. Caraffini, F. Neri, Multicriteria adaptive differential evolution for global
numerical optimization, Integrated Computer-Aided Engineering, 22 (2015) 103-107.
[9] J. Cheng, G. Zhang, F. Neri, Enhancing distributed differential evolution with multicultural migration
for global numerical optimization, Information Sciences, 247 (2013) 72-93.
[10] Y.C. Chuang, C.T. Chen, C. Hwang, A real-coded genetic algorithm with a direction-based crossover
operator, Information Sciences, 305 (2015) 320-348.
[11] L. Cui, G. Li, Q. Lin, Z. Du, W. Gao, J. Chen, N. Lu, A novel artificial bee colony algorithm with
depth-first search framework and elite-guided search equation, Information Sciences, 367-368 (2016)
1012-1044.
[12] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical
tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and
Evolutionary Computation, 1 (2011) 3-18.
[13] A. Ebrahimnejad, M. Tavana, H. Alrezaamiri, A novel artificial bee colony algorithm for shortest path
[15] W. Gao, S. Liu, Improved artificial bee colony algorithm for global optimization, Information
Processing Letters, 111 (2011) 871-882.
[16] W.F. Gao, L.L. Huang, S.Y. Liu, C. Dai, Artificial Bee Colony Algorithm Based on Information
Learning, IEEE Transactions on Cybernetics, 45 (2015) 2827-2839.
[17] W.F. Gao, L.L. Huang, J. Wang, S.Y. Liu, C.D. Qin, Enhanced artificial bee colony algorithm through
differential evolution, Applied Soft Computing, 48 (2016) 137-150.
[18] W.F. Gao, S.Y. Liu, A modified artificial bee colony algorithm, Computers & Operations Research, 39
(2012) 687-697.
[19] W.F. Gao, S.Y. Liu, L.L. Huang, A novel artificial bee colony algorithm based on modified search
equation and orthogonal learning, IEEE Transactions on Cybernetics, 43 (2013) 1011-1024.
[20] W.F. Gao, S.Y. Liu, L.L. Huang, Enhancing artificial bee colony algorithm using more
information-based search equations, Information Sciences, 270 (2014) 112-133.
[21] Y.J. Gong, J.J. Li, Y. Zhou, Y. Li, H.S. Chung, Y.H. Shi, J. Zhang, Genetic Learning Particle Swarm
Optimization, IEEE Transactions on Cybernetics, 46 (2016) 2277-2290.
[22] Y. Huo, Y. Zhuang, J. Gu, S. Ni, Elite-guided multi-objective artificial bee colony algorithm, Applied
Soft Computing, 32 (2015) 199-210.
[23] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization:
artificial bee colony (ABC) algorithm, Journal of Global Optimization, 39 (2007) 459-471.
[24] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Applied Soft
Computing, 8 (2008) 687-697.
[25] D. Karaboga, B. Gorkemli, A quick artificial bee colony (qABC) algorithm and its performance on
optimization problems, Applied Soft Computing, 23 (2014) 227-238.
[26] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony
(ABC) algorithm and applications, Artificial Intelligence Review, 42 (2012) 21-57.
[27] M.S. Kiran, H. Hakli, M. Gunduz, H. Uguz, Artificial bee colony algorithm with variable search
strategy for continuous optimization, Information Sciences, 300 (2015) 140-157.
[28] A. Kishor, P.K. Singh, J. Prakash, NSABC: Non-dominated sorting based multi-objective artificial bee
colony algorithm and its application in data clustering, Neurocomputing, 216 (2016) 514-533.
[29] G. Li, L. Cui, X. Fu, Z. Wen, N. Lu, J. Lu, Artificial bee colony algorithm with gene recombination for
numerical function optimization, Applied Soft Computing, 52 (2017) 146-159.
[30] X. Li, G. Yang, Artificial bee colony algorithm with memory, Applied Soft Computing, 41 (2016)
362-372.
[31] H. Maghsoudlou, B. Afshar-Nadjafi, S.T.A. Niaki, A multi-objective invasive weeds optimization
algorithm for solving multi-skill multi-mode resource constrained project scheduling problem, Computers
& Chemical Engineering, 88 (2016) 157-169.
[32] F. Neri, E. Mininno, G. Iacca, Compact Particle Swarm Optimization, Information Sciences, 239 (2013)
96-121.
[33] F. Neri, V. Tirronen, Recent advances in differential evolution: a survey and experimental analysis,
Artificial Intelligence Review, 33 (2009) 61-106.
[34] S.K. Nseef, S. Abdullah, A. Turky, G. Kendall, An adaptive multi-population artificial bee colony
algorithm for dynamic optimisation problems, Knowledge-Based Systems, 104 (2016) 14-23.
[35] C. Ozturk, E. Hancer, D. Karaboga, Dynamic clustering with improved binary artificial bee colony
algorithm, Applied Soft Computing, 28 (2015) 69-80.
[36] C. Ozturk, E. Hancer, D. Karaboga, A novel binary artificial bee colony algorithm based on genetic
[46] Q. Yang, W. Chen, Z. Yu, T. Gu, Y. Li, H. Zhang, J. Zhang, Adaptive Multimodal Continuous Ant
Colony Optimization, IEEE Transactions on Evolutionary Computation, 21 (2017) 191-205.
[47] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization,
Applied Mathematics and Computation, 217 (2010) 3166-3173.