Article
Particle Swarm Optimization Based on a Novel Evaluation
of Diversity
Haohao Zhou and Xiangzhi Wei *
Intelligent Manufacturing and Information Engineering, Shanghai Jiao Tong University, Shanghai 200240, China;
[email protected]
* Correspondence: [email protected]; Tel.: +86-159-0191-7804
Abstract: In this paper, we propose a particle swarm optimization variant based on a novel evaluation of diversity (PSO-ED). Through a novel encoding of the sub-spaces of the search space and the hash table technique, the diversity of the swarm can be evaluated efficiently without any information compression. This paper proposes a notion of exploration degree, based on the diversity of the swarm in the exploration, exploitation, and convergence states, to characterize the degree of demand for dispersion of the swarm. Further, a disturbance update mode is proposed to help the particles jump to promising regions while reducing the cost of function evaluations for poor particles. The effectiveness of PSO-ED is validated on the CEC2015 test suite by comparison with seven popular PSO variants; out of 12 benchmark functions, PSO-ED achieves the best results on six for both the 10-D and 30-D cases.
1. Introduction
Inspired by the emergent motion of the foraging behavior of a flock of birds in nature, particle swarm optimization (PSO) was first proposed by Kennedy and Eberhart [1,2] in 1995 to solve continuous nonlinear optimization problems. Compared with other evolutionary algorithms, PSO has attracted much attention since it was proposed because of its fewer control parameters and better convergence. Nowadays, PSO has been widely used in many fields, such as communication networks [3,4], medicine engineering [5–7], task scheduling [8,9], energy management [10], linguistics studies [11], supply chain management [12], and neural networks [13,14].
Despite its robustness for solving complex optimization problems, rapid convergence may cause the swarm to be easily trapped in local optima when solving multimodal problems via PSO [15–22]. Therefore, reasonably using the swarm's exploration ability (global investigation of the search space) and exploitation ability (finer search around a local optimum) is a crucial factor for PSO's success, especially for complex multimodal problems having a large number of local minima. For this purpose, several PSO variants have been reported. In the following, we discuss three main types: PSO with adjusted parameters, PSO with various topology structures, and PSO with hybrid strategies.
PSO with adjusted parameters: It is evident that appropriate control parameters, such as the inertia weight ω and the acceleration coefficients c1 and c2, have significant effects on the exploration and exploitation abilities of the swarm. Increasing the value of ω increases the diversity of the swarm, while increasing the values of c1 and c2 accelerates the particles towards their historical optimal positions or the optimal position of the whole swarm. Zhan et al. [15] proposed an adaptive particle swarm optimization (APSO) algorithm in 2009, which updates the control parameters (ω, c1, and c2) adaptively based on the distribution of positions and fitness of the swarm. Zhang et al. [23] proposed an inertia weight adjustment scheme based on Bayesian techniques to enhance the swarm's exploitation ability. Tanweer et al. [24] proposed a PSO algorithm (SRPSO) that employs a self-regulating inertia weight strategy for the best particle to enhance the exploration ability.
Taherkhani and Safabakhsh [25] proposed an adaptive approach that determines the inertia weight in each dimension for each particle, based on its performance and distance from its best position.
PSO with various topology structures: PSO variants with different topology structures, namely improved fixed topologies, dynamic topologies, and multi-swarms, have been shown to be efficient in controlling the exploration and exploitation capabilities [26–28].
Kennedy [29] proposed a small-world social network and studied different topologies’
influences on PSO algorithms’ performances. It was found that sparsely connected net-
works were suitable for complex functions, and densely connected networks were useful
for simple functions.
Suganthan [30] first introduced the concept of dynamic topologies for PSO, where sub-
swarms (each was a particle initially) were gradually merged as the evolution progressed.
Cooren et al. [28] proposed an adaptive PSO called TRIBES, which uses multiple sub-swarms with independent topological structures that change over time. Bonyadi et al. [31] presented dynamic topologies that grow the sub-swarms' sizes and merge sub-swarms.
Zhang [32] proposed DEPSO, which generated a weighted search center based on top-k
elite particles to guide the swarm.
Van den Bergh and Engelbrecht [33] divided a d-dimensional swarm into k (k < d) sub-swarms and made the sub-swarms cooperate by exchanging their information (e.g., the best particle).
Blackwell and Branke [34] proposed a multi-swarm PSO for dynamic functions with the op-
timal values changing over time. Liang’s group [35,36] presented a dynamic multi-swarm
PSO with small sub-swarms frequently regrouped using various schedules. Xu et al. [37]
hybridized the dynamic multi-swarm PSO with a new cooperative learning strategy in
which the worst two particles learned from the two better sub-swarms. Chen et al. [21]
proposed a dynamic multi-swarm PSO with a differential learning strategy. It combined
the differential mutation into PSO and employed Quasi-Newton method as a local searcher.
PSO with hybrid strategies: To improve the performances of PSO, combining ex-
cellent strategies into PSO has been shown to be an effective approach. For example,
Mirjalili et al. [38] combined PSO with gravitational search algorithm for efficiently training
feedforward neural networks. Zhang et al. [39] combined PSO with a back-propagation
algorithm to efficiently train the weights of feedforward neural networks. Nagra et al. [40]
developed a hybrid of dynamic multi-swarm PSO with a gravitational search algorithm
for improving the performance of PSO. Zhan et al. [41] combined PSO with orthogonal
experimental design to discover an excellent exemplar from which the swarm can quickly
learn and speed up the searching process. Bonyadi et al. [31] hybridized PSO with a
covariance matrix adaptation strategy to improve the solutions in the latter phases of the
searching process. Garg [42] combined PSO with genetic algorithms (GA): creating a new
population by replacing weak particles with excellent ones via selection, crossover, and
mutation operators. Plevris and Papadrakakis [43] combined PSO with a gradient-based
quasi-Newton SQP algorithm for optimizing engineering structures. N. Singh and S.B.
Singh [44] combined PSO with grey wolf optimizer for improving the convergence rate
of the iterations. Raju et al. [45] combined PSO with a bacterial foraging optimization
for 3D printing parameters of complicated models. Visalakshi and Sivanandam [46] com-
bined PSO and the simulated annealing algorithm for processing dynamic task scheduling.
Kang [47] introduced opposition-based learning (OBL) into PSO to improve the swarm’s
performance in noisy environments. Cao et al. [48] embedded the comprehensive learning
particle swarm optimizer (CLPSO) with local search (CL) to take advantage of both the
exploration ability of CLPSO and the exploitation ability of CL.
Many scholars have conducted intensive research on the theory and applications of PSO algorithms. However, there are still some shortcomings. For example, adaptive parameter adjustment strategies may not truly reflect the evolution of the population, and the particle swarm cannot effectively jump out of locally optimal areas. To address these issues, a particle swarm optimization variant based on a novel evaluation of diversity (PSO-ED) is proposed in this paper; its major innovations are a sub-space-encoding-based diversity measure, a notion of exploration degree for adaptively adjusting the evolution parameters, and a disturbance update mode for poorly performing particles.
2. Related Work
2.1. Standard PSO
PSO is a population-based stochastic optimization algorithm introduced in 1995 by Kennedy and Eberhart without inertia weight [1,2]. Since the inertia weight was introduced by Shi and Eberhart [49] in 1998, it has shown its power in controlling the exploration and exploitation processes of evolution (Equations (1) and (2)).

v_i^d(t + 1) = ω v_i^d(t) + c1 r1 (P_i^d(t) − x_i^d(t)) + c2 r2 (P_g^d(t) − x_i^d(t))    (1)

x_i^d(t + 1) = x_i^d(t) + v_i^d(t + 1)    (2)

where v_i^d(t) and x_i^d(t) denote the velocity and position of particle i in dimension d at the t-th iteration. Pi is the historically best position of particle i, and Pg is the position of the globally best particle. Acceleration coefficients c1 and c2 are commonly set in the range [0.5, 2.5] [50,51]. r1 and r2 are two randomly generated values within the range [0, 1], and ω typically decreases linearly from 0.9 to 0.4 [49].
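To make the update rule concrete, the following is a minimal sketch of Equations (1) and (2) in Python; the array shapes, the velocity/position clipping, and the default parameter values are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np

def standard_pso_update(x, v, pbest, gbest, omega, c1=2.0, c2=2.0,
                        x_lb=-100.0, x_ub=100.0, v_ub=10.0):
    """One iteration of the standard PSO update, Equations (1) and (2).

    x, v, pbest : (Ns, D) arrays of positions, velocities, and personal bests P_i
    gbest       : (D,) array, global best position P_g
    The bounds and coefficients are illustrative defaults, not values from the paper.
    """
    Ns, D = x.shape
    r1 = np.random.rand(Ns, D)
    r2 = np.random.rand(Ns, D)
    v_new = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    v_new = np.clip(v_new, -v_ub, v_ub)      # keep velocity within [-v_ub, v_ub]
    x_new = np.clip(x + v_new, x_lb, x_ub)   # Eq. (2), clamped to the search box
    return x_new, v_new
```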
the number Z_q of particles in each subspace; (3) calculating the probability for each subspace by Equation (3); and (4) evaluating the diversity (the information entropy) by Equation (4).

p_q = Z_q / N_s    (3)

E = − ∑_{q=1}^{Q} p_q log_n p_q    (4)
where Ns represents the total number of particles, q represents the q-th subspace, and E
represents the information entropy and is taken as the value of diversity.
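As a quick illustration of Equations (3) and (4), the sketch below computes the entropy-based diversity from per-subspace particle counts; the choice of logarithm base (here the number of occupied subspaces, so that E is normalized) is an assumption, since the base n is not fixed in the text.

```python
import numpy as np

def diversity_entropy(subspace_counts, log_base=None):
    """Entropy-based diversity, Equations (3) and (4).
    subspace_counts: particle counts Z_q for the occupied subspaces only
    (empty subspaces contribute 0 and are simply omitted)."""
    counts = np.asarray(subspace_counts, dtype=float)
    Ns = counts.sum()                           # total number of particles
    p = counts / Ns                             # Eq. (3): p_q = Z_q / N_s
    base = log_base or max(len(counts), 2)      # assumed base of log_n
    return float(-np.sum(p * np.log(p) / np.log(base)))  # Eq. (4)

# Three particles spread over three subspaces give maximal (normalized) diversity:
print(diversity_entropy([1, 1, 1]))   # approximately 1.0
```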
3. PSO-ED
This section presents the technical details of PSO-ED. Section 3.1 introduces the main
idea of PSO-ED; Section 3.2 presents our evaluation strategy of computing the exploration
degree; Section 3.3 presents an adaptive update of inertia weight based on the exploration
degree; Section 3.4 presents the swarm reinitialization mechanism that helps the swarm
escape local traps; Section 3.5 presents two update modes, i.e., the normal update mode and the disturbance update mode, for saving function evaluations (FEs) on poorly performing particles.
Table 1. Nomenclature.

Symbol | Quantity
t | The index of the current iteration
xi | The position of particle i, and xi ∈ [xlb, xub], where xlb and xub are the lower and upper bounds of the position
vi | The velocity of particle i, and vi ∈ [−vub, vub], where vub is the upper bound of the velocity
ω, c1, c2, c3 | The inertia weight and acceleration coefficients for PSO, respectively; c3 is defined in Section 3.4
Pi, Pg, Pt | The positions of Pbest, Gbest, and Tbest, where Tbest denotes the best position over all historical rounds of evolution
Ns | The size of the swarm
D | The dimension of the problem space
Ntotal | The total number of iterations
S | The evolution state of the swarm, and S ∈ {er, ei, ec}, where er, ei, ec represent the Exploration state, the Exploitation state, and the Convergence state, respectively
Nstate | The number of generations of state maintenance, and Nstate ∈ [Nsmin, Nsmax], where Nsmin and Nsmax are the minimum and maximum generations of state maintenance
Er | The average evolution rate
Nud | The number of undeveloped generations, and Nud < Numax, where Numax is the maximum number of consecutive undeveloped generations
ci | The ID of the subspace where the i-th particle is located (discussed in Section 3.2.1)
E | The information entropy of the swarm position
Ed | The exploration degree of the swarm, and Ed ∈ [Edlb, Edub], where Edlb and Edub are the lower and upper bounds of Ed
Nw | The number of weak particles that require the disturbance update mode
mi | The update mode of particle i, and mi ∈ {mnor, mdis}, where mnor and mdis are the normal and disturbance update modes
Figure 1. Flowchart of the particle swarm optimization variant based on a novel evaluation of diversity (PSO-ED).
3.2. Evaluation of the Exploration Degree
3.2.1. A Novel Diversity Evaluation Scheme
As mentioned above, the distance-based diversity is evaluated by computing the distances between particles. The essence of this scheme is space compression, which compresses the D-dimensional space features into a one-dimensional space. For example, assume that there are three particles in the three-dimensional space whose positions are p1 = {1,0,0}, p2 = {0,1,0}, and p3 = {0,0,1}; then the mutual distance between any two particles is {(p1, p2) = 2^0.5, (p1, p3) = 2^0.5, (p2, p3) = 2^0.5}, which has only one dimension. Observe that
the three points are separated by a distance of 2^0.5, but they are regarded as being aggregated together since the information entropy of the distance information is 0 (Equations (3) and (4)). Therefore, the information entropy based on distance cannot reflect the swarm diversity very well. On the other hand, if the space is divided into multiple small subspaces, the complexity of traversing all subspaces is fairly high. For example, in the D-dimensional space, if each dimension is divided into K equal parts, then the complexity of traversing all subspaces is O(K^D), which is formidable when D is larger than 10.
To calculate the information entropy of the swarm efficiently and evaluate the degree of dispersion of the swarm as accurately as possible, we propose a novel diversity evaluation scheme. The crucial steps of our approach are as follows:
(1) For each particle in the swarm, compute the index (ID) of the subspace where the particle is located. First of all, each dimension of the search space is divided into K equal parts. Then the ID of a subspace is a string of D numbers, each of which represents the index of the part of the dimension occupied by the subspace. For example, in the 3-dimensional space in Figure 2, the ID of the blue subspace is {2, 3, 1}.
Figure 2. The relation of the ID of the blue subspace and the dimensions of the search space.
Denote c_i = {c_i^1, c_i^2, ..., c_i^D} as the ID of the subspace where the i-th particle is located, where c_i^d is obtained from Equation (5):

c_i^d = ⌈(x_i^d − x_lb) / ((x_ub − x_lb)/K)⌉    (5)
(2) Build a hash table h by traversing the IDs obtained in Step (1). h is composed of key-value pairs {key, value}, where key is the ID of a subspace and value is the number of particles in that subspace. We traverse the IDs of the subspaces obtained in the above step; if a key c already exists in h, its value h(c) is increased by 1; otherwise, a new key-value pair {c, 1} is inserted into h.
(3) Determine the information entropy E of the swarm based on h. Note that an empty subspace without any particle contributes 0 to the calculation of the information entropy, since the probability p of particles falling into this subspace is 0 by Equations (3) and (4). Therefore, only the subspaces corresponding to the keys of h can contribute to the information entropy. p_q in Equation (3) represents the probability of falling into the subspace q, and p_q = h(c)/Ns. Therefore, E can be obtained by Equations (3) and (4) based on the keys of h and can be used in evaluating the exploration degree in Equation (8).
The time complexity of the above procedure is analyzed as follows: in Step (1), we need to calculate the c_i^d of each particle i in each dimension d, so its time complexity is O(Ns·D), where Ns is the size of the swarm and D is the dimension of the search space; in Step (2), inserting a key-value pair into the hash table h can be done in O(1) time, so the time for traversing the IDs to establish h is O(Ns); in the last step, because only the keys of the hash table h need to be traversed and the number of IDs is no more than Ns, the time complexity of this step is O(Ns).
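The following sketch ties Steps (1)–(3) together: it computes subspace IDs via Equation (5), counts them with a hash table (a Python dict via collections.Counter), and evaluates the entropy of Equation (4). The clamping of boundary points into [1, K] and the logarithm base are assumptions made for the illustration.

```python
import math
from collections import Counter

def subspace_id(x, x_lb, x_ub, K):
    """Step (1): ID of the subspace containing position x (Equation (5)).
    x, x_lb, x_ub are length-D sequences; K is the number of parts per dimension."""
    return tuple(
        min(max(math.ceil((xd - lb) / ((ub - lb) / K)), 1), K)  # clamp boundary points
        for xd, lb, ub in zip(x, x_lb, x_ub)
    )

def swarm_diversity(positions, x_lb, x_ub, K, log_base=None):
    """Steps (2)-(3): hash-table counting of occupied subspaces, then entropy E."""
    h = Counter(subspace_id(x, x_lb, x_ub, K) for x in positions)  # O(Ns*D) + O(Ns)
    Ns = len(positions)
    base = log_base or max(len(h), 2)                              # assumed log base
    return -sum((z / Ns) * math.log(z / Ns, base) for z in h.values())

# Example: three particles in the unit cube with K = 4 parts per dimension
pts = [(0.1, 0.9, 0.2), (0.8, 0.1, 0.3), (0.82, 0.12, 0.31)]
print(swarm_diversity(pts, x_lb=(0, 0, 0), x_ub=(1, 1, 1), K=4))
```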
Figure 3. The relationship between Ed and State.
We use the diversity E(t) to determine the exact value of Ed. When E(t) is high, it indicates that the swarm may still explore a wider area and the corresponding value of Ed should be larger, and vice versa. To mimic this, we develop a linear function to express the relationship between Ed and E(t) in Equation (8).
In the following, we show how to realize an adaptive update of the inertial weight
and, therefore, the balance of the exploration and exploitation by Ed (t).
ω(t) = 1 / (1 + 0.67 e^(−2.67 Ed(t))) ∈ [0.4, 0.9], ∀ Ed(t) ∈ [0, 1]    (9)
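Equation (9) can be read as a logistic-shaped mapping from the exploration degree to the inertia weight; a minimal sketch follows, assuming Ed(t) is already available in [0, 1].

```python
import math

def adaptive_inertia_weight(E_d):
    """Inertia weight of Equation (9), driven by the exploration degree E_d in [0, 1]."""
    return 1.0 / (1.0 + 0.67 * math.exp(-2.67 * E_d))

# A larger exploration degree yields a larger inertia weight, i.e. a wider search.
for e in (0.0, 0.5, 1.0):
    print(f"E_d = {e:.1f} -> omega = {adaptive_inertia_weight(e):.3f}")
```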
Let Gbest denote the best position of the swarm of the current round of evolution (with a single initialization or reinitialization), and let Tbest denote the best position of the swarm over all historical rounds of evolution; thus, Tbest is at least as good as Gbest. Finally, we add the Tbest term to the particle update in Equation (10), which ensures that the optimal position (Tbest) has only a moderate effect on the swarm with less attraction power.

v_i^d(t + 1) = ω v_i^d(t) + c1 r1 (P_i^d(t) − x_i^d(t)) + c2 r2 (P_g^d(t) − x_i^d(t)) + c3 r3 (P_t^d(t) − x_i^d(t))    (10)
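A sketch of the three-term velocity update of Equation (10) is given below; the vectorized array shapes and the way r1, r2, and r3 are drawn per particle and per dimension are assumptions, and c3 is treated as a given coefficient (it is defined in Section 3.4).

```python
import numpy as np

def psoed_velocity_update(x, v, pbest, gbest, tbest, omega, c1, c2, c3):
    """Velocity update with the additional Tbest term, Equation (10).

    x, v, pbest : (Ns, D) arrays; gbest, tbest : (D,) arrays.
    """
    Ns, D = x.shape
    r1, r2, r3 = (np.random.rand(Ns, D) for _ in range(3))
    return (omega * v
            + c1 * r1 * (pbest - x)   # cognitive term (Pbest)
            + c2 * r2 * (gbest - x)   # social term (Gbest, current round)
            + c3 * r3 * (tbest - x))  # historical term (Tbest, all rounds)
```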
Step 3: Replace the position of poor particles with the new position generated by the
disturbance update mode.
Another important parameter is the swarm size Ns. We choose five values (20, 30, 40, 50, and 60) to conduct experiments. The other parameters are set as follows: Ntotal = FEs/Ns, Nsmin = 0.01Ntotal, Nsmax = 0.1Ntotal, α = 0.01, Numax = 0.01Ntotal, vub = 0.01(xub − xlb). The results are shown in Table 4. The average ranking shows that PSO-ED performs best when Ns = 40.
Table 4. Results of PSO-ED under different swarm sizes Ns (mean value and rank per function; AR is the average rank).

Item | Ns = 20 (Mean, Rank) | Ns = 30 (Mean, Rank) | Ns = 40 (Mean, Rank) | Ns = 50 (Mean, Rank) | Ns = 60 (Mean, Rank)
F1 | 8.65 × 10^4, 2 | 6.97 × 10^4, 1 | 3.03 × 10^5, 5 | 1.20 × 10^5, 3 | 1.27 × 10^5, 4
F2 | 2.31 × 10^3, 4 | 2.45 × 10^3, 5 | 2.24 × 10^2, 1 | 2.08 × 10^3, 3 | 1.94 × 10^3, 2
F3 | 2 × 10^1, 1 | 2 × 10^1, 1 | 2 × 10^1, 1 | 2 × 10^1, 1 | 2 × 10^1, 1
F4 | 2.21 × 10^2, 5 | 1.99 × 10^2, 4 | 1.42 × 10^2, 1 | 1.82 × 10^2, 3 | 1.76 × 10^2, 2
F5 | 4.01 × 10^3, 3 | 4.17 × 10^3, 5 | 3.10 × 10^3, 1 | 3.96 × 10^3, 2 | 4.02 × 10^3, 4
F6 | 2.96 × 10^4, 4 | 2.28 × 10^4, 1 | 3.05 × 10^4, 5 | 2.86 × 10^4, 3 | 2.78 × 10^4, 2
F7 | 1.34 × 10^1, 5 | 1.22 × 10^1, 4 | 1.04 × 10^1, 1 | 1.14 × 10^1, 2 | 1.19 × 10^1, 3
F8 | 1.84 × 10^4, 3 | 1.54 × 10^4, 2 | 1.46 × 10^4, 1 | 2.15 × 10^4, 4 | 2.62 × 10^4, 5
F9 | 2.37 × 10^2, 5 | 1.57 × 10^2, 2 | 1.04 × 10^2, 1 | 1.62 × 10^2, 3 | 1.95 × 10^2, 4
F10 | 9.28 × 10^2, 3 | 9.47 × 10^2, 5 | 3.35 × 10^2, 1 | 8.79 × 10^2, 2 | 9.37 × 10^2, 4
F11 | 3.29 × 10^4, 2 | 3.34 × 10^4, 4 | 2.54 × 10^4, 1 | 3.35 × 10^4, 5 | 3.31 × 10^4, 3
F12 | 1 × 10^2, 1 | 1 × 10^2, 1 | 1 × 10^2, 1 | 1 × 10^2, 1 | 1 × 10^2, 1
AR | 3.17 | 2.92 | 1.67 | 2.67 | 2.92
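For reference, the parameter settings listed before Table 4 can be collected as in the sketch below; the FEs value in the usage line is purely illustrative and not taken from the paper.

```python
def psoed_parameters(FEs, Ns, x_lb, x_ub):
    """Derived parameter settings listed before Table 4 (Ntotal = FEs/Ns, etc.)."""
    N_total = FEs // Ns                  # total number of iterations
    return {
        "N_total": N_total,
        "N_smin": 0.01 * N_total,        # minimum generations of state maintenance
        "N_smax": 0.1 * N_total,         # maximum generations of state maintenance
        "alpha": 0.01,
        "N_umax": 0.01 * N_total,        # maximum consecutive undeveloped generations
        "v_ub": 0.01 * (x_ub - x_lb),    # velocity upper bound
    }

print(psoed_parameters(FEs=100_000, Ns=40, x_lb=-100.0, x_ub=100.0))
```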
the same best mean values as five other algorithms on 2 functions (F9 and F12). However, compared with MPSO on F3, F4, and F5, DMSPSO on F8, FIPS on F7, and CLPSO on F10, our PSO-ED is slightly weaker.
Table 6. Comparison of PSO-ED with 7 popular PSO variants for 10-D case.
Table 7. Comparison of PSO-ED with 7 popular PSO variants for 30-D case.
As shown in Table 7 for the 30-D case, compared to the counterparts, out of the total of 12 functions, our algorithm obtains the best mean values on 6 functions (F2, F3, F6, F8, F11, and F12). However, PSO-ED is slightly weaker than MPSO on F4, F5, and F8, and than SPSO on F1.
As shown in Table 8, in the non-parametric Wilcoxon signed-rank test between PSO-ED and the other algorithms, PSO-ED achieves obvious advantages for both 10-D and 30-D, except that MPSO and PSO-ED achieve the same optimal result for 10-D.
Table 8. Statistical analysis of Wilcoxon signed-rank test between PSO-ED and other PSO variants.
In conclusion, although PSO-ED is weak in solving a few test functions, its results are
superior to those of popular PSO variants such as MPSO, DMSPSO, FIPS, CLPSO, SPSO,
LPSO, and GPSO.
5. Conclusions
In this paper, we proposed a novel measure of diversity based on sub-space encoding for the search space; a notion of exploration degree based on the diversity in the exploration, exploitation, and convergence states, which efficiently evaluates the degree of demand for the dispersion of the swarm; and a disturbance update mode for updating the poorly performing particles' positions to save the cost of function evaluations (FEs) on them. Since the diversity evaluation can reflect the swarm distribution very well, it can provide a better basis for the adaptive parameter adjustment strategy and assist the swarm in jumping out of local traps. Therefore, this method is more suitable for complex multimodal optimization.
The effectiveness of the developed techniques was validated through a set of bench-
mark functions in CEC2015. Compared with 7 popular PSO variants, out of the 12 bench-
mark functions, PSO-ED obtains 6 best results for both the 10-D and 30-D cases.
However, the stability of the developed PSO-ED can be further improved and is worthy of investigation in future work. For example, for F12 in 10-D and 30-D, although PSO-ED and SPSO both achieve the optimal mean, PSO-ED is slightly weaker in terms of the standard deviation, which means that PSO-ED suffers from a risk of falling into some local optimum.
Author Contributions: Conceptualization, H.Z. and X.W.; methodology, H.Z. and X.W.; software,
H.Z.; validation, H.Z. and X.W.; formal analysis, H.Z. and X.W.; investigation, H.Z. and X.W.;
resources, H.Z. and X.W.; data curation, H.Z. and X.W.; writing—original draft preparation, H.Z. and
X.W.; writing—review and editing, H.Z. and X.W.; visualization, H.Z.; supervision, X.W.; project
administration, X.W.; funding acquisition, X.W. All authors have read and agreed to the published
version of the manuscript.
Funding: This work was supported in part by the Science and Technology Commission of Shanghai
Municipality Fund No. 18510745700.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Acknowledgments: This work was supported in part by the Science and Technology Commission of
Shanghai Municipality Fund No. 18510745700.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the ICNN’95 International Conference on Neural
Networks, Perth, WA, Australia, 27 November—1 December 1995; pp. 1942–1948.
2. Eberhart, R.; Kennedy, J. A new optimizer using particle swarm theory. In Proceedings of the MHS’95 Sixth International
Symposium on Micro Machine and Human Science, Nagoya, Japan, 4–6 October 1995; pp. 39–43.
3. Kuila, P.; Jana, P.K. Energy efficient clustering and routing algorithms for wireless sensor networks: Particle swarm optimization
approach. Eng. Appl. Artif. Intell. 2014, 33, 127–140. [CrossRef]
4. Shen, M.; Zhan, Z.H.; Chen, W.N.; Gong, Y.J.; Zhang, J.; Li, Y. Bi-velocity discrete particle swarm optimization and its application
to multicast routing problem in communication networks. IEEE Trans. Ind. Electron. 2014, 61, 7141–7151. [CrossRef]
5. Zhang, Y.; Wang, S.; Ji, G.; Dong, Z. An MR Brain Images Classifier System via Particle Swarm Optimization and Kernel Support
Vector Machine. Sci. World J. 2013, 2013, 130–134. [CrossRef] [PubMed]
6. Tan, T.Y.; Zhang, L.; Lim, C.P.; Fielding, B.; Yu, Y.; Anderson, E. Evolving Ensemble Models for Image Segmentation Using
Enhanced Particle Swarm Optimization. IEEE Access 2019, 7, 34004–34019. [CrossRef]
7. Sakri, S.B.; Rashid, N.B.A.; Zain, Z.M. Particle Swarm Optimization Feature Selection for Breast Cancer Recurrence Prediction.
IEEE Access 2018, 6, 29637–29647. [CrossRef]
8. Dhinesh Babu, L.D.; Venkata Krishna, P. Honey bee behavior inspired load balancing of tasks in cloud computing environments.
Appl. Soft. Comput. J. 2013, 13, 2292–2303.
9. Wang, Z.J.; Zhan, Z.H.; Kwong, S.; Jin, H.; Zhang, J. Adaptive Granularity Learning Distributed Particle Swarm Optimization for
Large-Scale Optimization. IEEE Trans. Cybern. 2020, 1–14. [CrossRef]
10. Sharafi, M.; ELMekkawy, T.Y. Multi-objective optimal design of hybrid renewable energy systems using PSO-simulation based
approach. Renew. Energy 2014, 68, 67–79. [CrossRef]
11. Cabrerizo, F.J.; Herrera-Viedma, E.; Pedrycz, W. A method based on PSO and granular computing of linguistic information to
solve group decision making problems defined in heterogeneous contexts. Eur. J. Oper. Res. 2013, 230, 624–633. [CrossRef]
12. Zhang, X.; Du, K.J.; Zhan, Z.H.; Kwong, S.; Gu, T.L.; Zhang, J. Cooperative Coevolutionary Bare-Bones Particle Swarm
Optimization with Function Independent Decomposition for Large-Scale Supply Chain Network Design with Uncertainties.
IEEE Trans. Cybern. 2019, 50, 4454–4468. [CrossRef]
13. Xue, Y.; Tang, T.; Liu, A.X. Large-Scale Feedforward Neural Network Optimization by a Self-Adaptive Strategy and Parameter
Based Particle Swarm Optimization. IEEE Access 2019, 7, 52473–52483. [CrossRef]
14. Ali, M.H.; Al Mohammed, B.A.D.; Ismail, A.; Zolkipli, M.F. A New Intrusion Detection System Based on Fast Learning Network
and Particle Swarm Optimization. IEEE Access 2018, 6, 20255–20261. [CrossRef]
15. Zhan, Z.H.; Zhang, J.; Li, Y.; Chung, H.S.H. Adaptive particle swarm optimization. IEEE Trans. Syst. Man. Cybern Part. B Cybern.
2009, 39, 1362–1381. [CrossRef]
16. Gou, J.; Lei, Y.X.; Guo, W.P.; Wang, C.; Cai, Y.Q.; Luo, W. A novel improved particle swarm optimization algorithm based on
individual difference evolution. Appl. Soft Comput. J. 2017, 57, 468–481. [CrossRef]
17. Niu, B.; Zhu, Y.; He, X.; Wu, H. MCPSO: A multi-swarm cooperative particle swarm optimizer. Appl. Math. Comput. 2007, 185,
1050–1062. [CrossRef]
18. Wang, L.; Yang, B.; Chen, Y. Improving particle swarm optimization using multi-layer searching strategy. Inf. Sci. 2014, 274, 70–94.
[CrossRef]
19. Han, F.; Liu, Q. A diversity-guided hybrid particle swarm optimization based on gradient search. Neurocomputing 2014, 137,
234–240. [CrossRef]
20. Zhao, F.; Tang, J.; Wang, J.; Jonrinaldi. An improved particle swarm optimization with decline disturbance index (DDPSO) for
multi-objective job-shop scheduling problem. Comput. Oper. Res. 2014, 45, 38–50. [CrossRef]
21. Chen, Y.; Li, L.; Peng, H.; Xiao, J.; Wu, Q. Dynamic multi-swarm differential learning particle swarm optimizer. Swarm Evol.
Comput. 2018, 39, 209–221. [CrossRef]
22. Lynn, N.; Ali, M.Z.; Suganthan, P.N. Population topologies for particle swarm optimization and differential evolution. Swarm
Evol. Comput. 2018, 39, 24–35. [CrossRef]
23. Zhang, L.; Tang, Y.; Hua, C.; Guan, X. A new particle swarm optimization algorithm with adaptive inertia weight based on
Bayesian techniques. Appl. Soft Comput. J. 2015, 28, 138–149. [CrossRef]
24. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Self regulating particle swarm optimization algorithm. Inf. Sci. 2015, 294, 182–202.
[CrossRef]
25. Taherkhani, M.; Safabakhsh, R. A novel stability-based adaptive inertia weight for particle swarm optimization. Appl. Soft Comput.
J. 2016, 38, 281–295. [CrossRef]
26. Kennedy, J.; Mendes, R. Population structure and particle swarm performance. In Proceedings of the 2002 Congress on
Evolutionary Computation, Honolulu, HI, USA, 12–17 May 2002; pp. 1671–1676.
27. Mendes, R.; Kennedy, J.; Neves, J. The fully informed particle swarm: Simpler, maybe better. IEEE Trans. Evol. Comput. 2004, 8,
204–210. [CrossRef]
28. Cooren, Y.; Clerc, M.; Siarry, P. Performance evaluation of TRIBES, an adaptive particle swarm optimization algorithm. Swarm
Intell. 2009, 3, 149–178. [CrossRef]
29. Kennedy, J. Small worlds and mega-minds: Effects of neighborhood topology on particle swarm performance. In Proceedings
of the 1999 Congress on Evolutionary Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; pp.
1931–1938.
30. Suganthan, P.N. Particle swarm optimiser with neighbourhood operator. In Proceedings of the 1999 Congress on Evolutionary
Computation-CEC99 (Cat. No. 99TH8406), Washington, DC, USA, 6–9 July 1999; pp. 1958–1962.
31. Bonyadi, M.R.; Li, X.; Michalewicz, Z. A hybrid particle swarm with a time-adaptive topology for constrained optimization.
Swarm Evol. Comput. 2014, 18, 22–37. [CrossRef]
32. Zhang, J.; Zhu, X.; Wang, Y.; Zhou, M. Dual-Environmental Particle Swarm Optimizer in Noisy and Noise-Free Environments.
IEEE Trans. Cybern. 2019, 49, 2011–2021. [CrossRef]
33. van den Bergh, F.; Engelbrecht, A.P. A cooperative approach to participle swam optimization. IEEE Trans. Evol. Comput. 2004, 8,
225–239. [CrossRef]
34. Blackwell, T.; Branke, J. Multi-Swarm Optimization in Dynamic Environments. In Workshops on Applications of Evolutionary
Computation; Springer: Berlin/Heidelberg, Germany, 2004; pp. 489–500.
35. Liang, J.J.; Suganthan, P.N. Dynamic multi-swarm particle swarm optimizer with local search. In Proceedings of the 2005 IEEE
Congress on Evolutionary Computation, Scotland, UK, 2–5 September 2005; pp. 522–528.
36. Zhao, S.Z.; Liang, J.J.; Suganthan, P.N.; Tasgetiren, M.F. Dynamic multi-swarm particle swarm optimizer with local search for
large scale global optimization. In Proceedings of the 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, 1–6
June 2008; pp. 3845–3852.
37. Xu, X.; Tang, Y.; Li, J.; Hua, C.; Guan, X. Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy. Appl.
Soft Comput. J. 2015, 29, 169–183. [CrossRef]
38. Mirjalili, S.; Mohd Hashim, S.Z.; Moradian Sardroudi, H. Training feedforward neural networks using hybrid particle swarm
optimization and gravitational search algorithm. Appl. Math. Comput. 2012, 218, 11125–11137. [CrossRef]
39. Zhang, J.R.; Zhang, J.; Lok, T.M.; Lyu, M.R. A hybrid particle swarm optimization-back-propagation algorithm for feedforward
neural network training. Appl. Math. Comput. 2007, 185, 1026–1037. [CrossRef]
40. Nagra, A.A.; Han, F.; Ling, Q.-H.; Mehta, S. An Improved Hybrid Method Combining Gravitational Search Algorithm with
Dynamic Multi Swarm Particle Swarm Optimization. IEEE Access 2019, 7, 50388–50399. [CrossRef]
41. Zhan, Z.H.; Zhang, J.; Li, Y.; Shi, Y.H. Orthogonal learning particle swarm optimization. IEEE Trans. Evol. Comput. 2011, 15,
832–847. [CrossRef]
42. Garg, H. A hybrid PSO-GA algorithm for constrained optimization problems. Appl. Math. Comput. 2016, 274, 292–305. [CrossRef]
43. Plevris, V.; Papadrakakis, M. A Hybrid Particle Swarm-Gradient Algorithm for Global Structural Optimization. Comput. Civ.
Infrastruct Eng. 2011, 26, 48–68. [CrossRef]
44. Singh, N.; Singh, S.B. Hybrid Algorithm of Particle Swarm Optimization and Grey Wolf Optimizer for Improving Convergence
Performance. J. Appl. Math. 2017. [CrossRef]
45. Raju, M.; Gupta, M.K.; Bhanot, N.; Sharma, V.S. A hybrid PSO–BFO evolutionary algorithm for optimization of fused deposition
modelling process parameters. J. Intell. Manuf. 2019, 30, 2743–2758. [CrossRef]
46. Sivanandam, S.N.; Visalakshi, P. Dynamic task scheduling with load balancing using parallel orthogonal particle swarm
optimization. Int. J. Bio-Inspired Comput. 2009, 1, 276–286. [CrossRef]
47. Kang, Q.; Xiong, C.; Zhou, M.; Meng, L. Opposition-Based Hybrid Strategy for Particle Swarm Optimization in Noisy Environ-
ments. IEEE Access 2018, 6, 21888–21900. [CrossRef]
48. Cao, Y.; Zhang, H.; Li, W.; Zhou, M.; Zhang, Y.; Chaovalitwongse, W.A. Comprehensive Learning Particle Swarm Optimization
Algorithm with Local Search for Multimodal Functions. IEEE Trans. Evol. Comput. 2019, 23, 718–731. [CrossRef]
49. Shi, Y.; Eberhart, R.C. A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on
Evolutionary Computation Proceedings, IEEE world congress on computational intelligence (Cat. No. 98TH8360), Anchorage,
AK, USA, 4–9 May 1998; pp. 69–73.
50. Chih, M.; Lin, C.J.; Chern, M.S.; Ou, T.Y. Particle swarm optimization with time-varying acceleration coefficients for the
multidimensional knapsack problem. Appl. Math. Model. 2014, 38, 1338–1350. [CrossRef]
51. Ratnaweera, A.; Halgamuge, S.K.; Watson, H.C. Self-organizing hierarchical particle swarm optimizer with time-varying
acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [CrossRef]
52. Wu, Y.; Gao, X.Z.; Huang, X.L.; Zenger, K. A hybrid optimization method of Particle Swarm Optimization and Cultural
Algorithm. In Proceedings of the 2010 6th International Conference on Natural Computation, Yantai, China, 10–12 August 2010;
pp. 2515–2519.
53. Xu, M.; You, X.; Liu, S. A Novel Heuristic Communication Heterogeneous Dual Population Ant Colony Optimization Algorithm.
IEEE Access 2017, 5, 18506–18515. [CrossRef]
54. Netjinda, N.; Achalakul, T.; Sirinaovakul, B. Particle Swarm Optimization inspired by starling flock behavior. Appl. Soft. Comput.
J. 2015, 35, 411–422. [CrossRef]
55. Fang, W.; Sun, J.; Chen, H.; Wu, X. A decentralized quantum-inspired particle swarm optimization algorithm with cellular
structured population. Inf. Sci. 2016, 330, 19–48. [CrossRef]
56. Zhu, J.; Lin, Y.; Lei, W.; Liu, Y.; Tao, M. Optimal household appliances scheduling of multiple smart homes using an improved
cooperative algorithm. Energy 2019, 171, 944–955. [CrossRef]
57. Zhang, W.X.; Chen, W.N.; Zhang, J. A dynamic competitive swarm optimizer based-on entropy for large scale optimization.
In Proceedings of the 2016 Eighth International Conference on Advanced Computational Intelligence (ICACI), Chiang Mai,
Thailand, 14–16 February 2016; pp. 365–371.
58. Ran, M.P.; Wang, Q.; Dong, C.Y. A dynamic search space Particle Swarm Optimization algorithm based on population entropy. In
Proceedings of the 26th Chinese Control and Decision Conference (2014 CCDC), Changsha, China, 31 May—2 June 2014; pp.
4292–4296.
59. Tang, K.; Li, Z.; Luo, L.; Liu, B. Multi-strategy adaptive particle swarm optimization for numerical optimization. Eng. Appl. Artif.
Intell. 2015, 37, 9–19. [CrossRef]
60. Solteiro Pires, E.J.; Machado, J.A.T.; de Moura Oliveira, P.B. Entropy diversity in multi-objective particle swarm optimization.
Entropy 2013, 15, 5475–5491. [CrossRef]
61. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; de Moura Oliveira, P.B. Dynamic shannon performance in a multiobjective particle
swarm optimization. Entropy 2019, 21, 1–10.
62. Solteiro Pires, E.J.; Tenreiro Machado, J.A.; de Moura Oliveira, P.B. PSO Evolution Based on a Entropy Metric. Adv. Intell. Syst.
Comput. 2020, 923, 238–248.
63. Olorunda, O.; Engelbrecht, A.P. Measuring exploration/exploitation in particle swarms using swarm diversity. In Proceedings of
the 2008 IEEE Congress on Evolutionary Computation, Hong Kong, China, 1–6 June 2008; pp. 1128–1134.
64. Riget, J.; Vesterstrøm, J.S. A Diversity-Guided Particle Swarm Optimizer—the ARPSO; Technical Report; (riget: 2002: DGPSO), no. 2
EVA Life; Department of Computer Science, University of Aarhus: Aarhus, Denmark, 2002.
65. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [CrossRef]
66. Eberhart, R.C.; Shi, Y. Comparing inertia weights and constriction factors in particle swarm optimization. In Proceedings of the
2000 Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000; pp. 84–88.
67. Xu, G.; Cui, Q.; Shi, X.; Ge, H.; Zhan, Z.H.; Lee, H.P. Particle swarm optimization based on dimensional learning strategy. Swarm
Evol. Comput. 2019, 45, 33–51. [CrossRef]
68. Kaucic, M. A multi-start opposition-based particle swarm optimization algorithm with adaptive velocity for bound constrained
global optimization. J. Glob. Optim. 2013, 55, 165–188. [CrossRef]
69. Zhu, J.; Lauri, F.; Koukam, A.; Hilaire, V. Scheduling optimization of smart homes based on demand response. In IFIP International
Conference on Artificial Intelligence Applications and Innovations; Springer: Berlin/Heidelberg, Germany, 2015; Volume 458, pp.
223–236.
70. Tian, D. Particle swarm optimization with chaos-based initialization for numerical optimization. Intell. Autom. Soft Comput. 2018,
24, 331–342. [CrossRef]
71. Ye, W.; Feng, W.; Fan, S. A novel multi-swarm particle swarm optimization with dynamic learning strategy. Appl. Soft Comput. J.
2017, 61, 832–843. [CrossRef]
72. Tian, D.; Shi, Z. MPSO: Modified particle swarm optimization and its applications. Swarm Evol. Comput. 2018, 41, 49–68.
[CrossRef]
73. Liang, J.J.; Qu, B.; Suganthan, P.; Chen, Q. Problem Definitions and Evaluation Criteria for the CEC 2015 Competition on Learning-Based
Real-Parameter Single Objective Optimization; Technical Report 201411A; Computational Intelligence Laboratory, Zhengzhou
University: Zhengzhou, China; Nanyang Technological University: Singapore, 2014.
74. Bratton, D.; Kennedy, J. Defining a Standard for Particle Swarm Optimization. In Proceedings of the 2007 IEEE Swarm Intelligence
Symposium, Honolulu, HI, USA, 1–5 April 2007; pp. 120–127.
75. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global optimization of
multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [CrossRef]