Sasc D 24 00394
Full Title: Empirical Analysis and Improvement of the PSO-Sono Optimization Algorithm
Hui Wang
Abstract: PSO-sono is a recent and promising variant of the Particle Swarm Optimization (PSO)
algorithm. It outperforms other popular PSO variants on many benchmark test sets. In
this paper, we investigate the performance of PSO-sono on more problems (including
21 real-world optimization problems). Moreover, we propose a new, more powerful yet
simpler and more efficient variant of PSO-sono, called IPSO-sono. The proposed
approach uses ring topology, non-linear ratio reduction and opposition-based learning
to improve the performance of PSO-sono. The proposed approach is compared with
other state-of-the-art meta-heuristic algorithms on the 12 IEEE CEC 2022 benchmark functions and the 21 real-world problems defined in IEEE CEC 2011. The results show that IPSO-sono
outperforms PSO-sono on most problems and performs well compared to other state-
of-the-art approaches.
Suggested Reviewers:
Opposed Reviewers:
Additional Information:
Free Preprint Service: YES, I want to share my research early and openly as a preprint.
Research data/code availability: Data will be made available on request.
Cover Letter
Dear Professor,
I am writing to submit a research paper titled "Empirical Analysis and Improvement of the PSO-Sono Optimization Algorithm" for consideration in your respected journal. This paper proposes a new, more powerful yet simpler and more efficient variant of the recent PSO-sono algorithm, called IPSO-sono.
This paper has not been published previously in another journal, nor is it currently under review elsewhere. All the work contained within the article is our own original creation.
Thank you for taking the time to consider our manuscript for inclusion in your respected journal.
Sincerely yours,
Mahamed G. H. Omran
Kuwait
[email protected]
Declaration of Interest Statement
Declaration of interests
☐The authors declare that they have no known competing financial interests or personal relationships
that could have appeared to influence the work reported in this paper.
☒The authors declare the following financial interests/personal relationships which may be considered
as potential competing interests:
I am on the editorial board of this journal. If there are other authors, they declare that they have no
known competing financial interests or personal relationships that could have appeared to influence
the work reported in this paper.
Manuscript
China, [email protected]
* Corresponding author
Abstract
PSO-sono is a recent and promising variant of the Particle Swarm Optimization (PSO) algorithm. It outper-
forms other popular PSO variants on many benchmark test sets. In this paper, we investigate the performance
of PSO-sono on more problems (including 21 real-world optimization problems). Moreover, we propose a
new, more powerful yet simpler and more efficient variant of PSO-sono, called IPSO-sono. The proposed
approach uses ring topology, non-linear ratio reduction and opposition-based learning to improve the perfor-
mance of PSO-sono. The proposed approach is compared with other state-of-the-art meta-heuristic algorithms
on the 12 IEEE CEC 2022 benchmark functions and the 21 real-world problems defined in IEEE CEC 2011. The results show that
IPSO-sono outperforms PSO-sono on most problems and performs well compared to other state-of-the-art
approaches.
Keywords: Meta-heuristics; Optimization; Particle Swarm Optimization; PSO-sono; Sustainability.
1 Introduction
Consider the following optimization problem:

minimize f(x), x = (x_1, x_2, ..., x_D) ∈ R^D,
subject to l_j ≤ x_j ≤ u_j, j = 1, ..., D,    (1)

where f(x) is the objective function to be minimized, x is a vector of decision variables in R^D, and l_j and u_j are the lower and upper bounds, respectively, for the j-th component of x. The goal is to find the value of x that minimizes the objective function f(x) while satisfying the boundary constraints.
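To make the problem formulation concrete, the following minimal Python sketch sets up a hypothetical bound-constrained instance (the sphere objective, the dimension D = 10 and the bound values are illustrative assumptions, not taken from the paper) and repairs a candidate solution by clipping it back into [l, u].

```python
import numpy as np

# Hypothetical bound-constrained problem: minimize f(x) subject to l_j <= x_j <= u_j.
D = 10                                  # problem dimension (assumed)
lower = np.full(D, -100.0)              # l
upper = np.full(D, 100.0)               # u

def f(x):
    """Sphere function, used here only as a stand-in objective."""
    return float(np.sum(x ** 2))

# A random candidate solution and its repaired (clipped) version.
x = np.random.uniform(lower - 50, upper + 50, D)   # may violate the bounds
x_feasible = np.clip(x, lower, upper)              # enforce the boundary constraints
print(f(x_feasible))
```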
Optimization problems arise in various fields of science, engineering, economics, and many other domains,
where the goal is to find the best solution among a set of feasible solutions. Optimization problems can be
categorized as linear or nonlinear, continuous or discrete, and deterministic or stochastic. In general, finding
the global optimal solution for complex optimization problems is a challenging task, as the number of possible
solutions increases exponentially with the problem size.
Meta-heuristics ([Yan14], [DSKC19] and [HMSCS19]) are a class of optimization algorithms that can han-
dle complex optimization problems with multiple local optima, discontinuous objective functions, and con-
straints. Meta-heuristics are iterative search algorithms that use heuristics or approximate methods to guide the
search towards the optimal solution. Unlike exact methods, such as linear programming or dynamic program-
ming, meta-heuristics do not guarantee finding the global optimal solution, but rather aim to find a good solution
within a reasonable time frame.
Meta-heuristics can be divided into two categories: population-based and single-solution-based methods.
Population-based methods, such as Genetic Algorithms (GA) [Gol89], Particle Swarm Optimization (PSO)
[KE95], Differential Evolution (DE) [SP95] and Ant Colony Optimization (ACO) ([MD99] and [SD08]), main-
tain a set of candidate solutions called the population, and iteratively update the population using various opera-
tors such as selection, crossover, mutation, and local search. Single-solution-based methods, such as Simulated
Annealing [SK21], Tabu Search [GL98] and Variable Neighborhood Search [HM99], start with a single initial
solution and iteratively update it by moving to a neighboring solution that improves the objective function value.
Meta-heuristics have been applied to various optimization problems in diverse fields, such as engineering
design, transportation, finance, scheduling and bioinformatics [OVRDS+ 21]. They have also inspired the de-
velopment of hybrid and adaptive methods that combine the strengths of different algorithms and adapt their
parameters dynamically based on the problem characteristics.
A recent and promising meta-heuristic is PSO-sono [MZML22], which is a variant of the PSO algorithm.
According to the results reported in [MZML22], PSO-sono generally outperforms other popular PSO-variants
on many benchmark test sets (namely, IEEE CEC 2013, 2014 and 2017) consisting of 88 real-parameter single-
objective optimization functions. In this paper, we investigate the performance of PSO-sono on more problems,
i.e. IEEE CEC 2022 and the 21 IEEE CEC 2011 real-world optimization problems. We believe that testing any
optimizer on real-world problems is essential to validate its performance. Moreover, we propose some changes
to the original algorithm to improve its performance. Another objective is to replace complex operations in
PSO-sono with simpler operations without deteriorating the performance of the optimizer.
Hence, the contributions of this work are:
• to investigate the performance of PSO-sono on more problems, especially real-world problems;
• to propose major and minor changes to the original algorithm to improve its performance; and
• to make PSO-sono easier to implement and simpler to understand by replacing relatively complex operations with simpler ones, without degrading the performance of the algorithm.
The rest of the paper is organized as follows. Section 2 describes the PSO algorithm and its variants.
The PSO-sono algorithm is explained in Section 3. In Section 4, the proposed approach is presented. The
results of the proposed algorithm, in comparison with other approaches from the literature, on the IEEE CEC
2022 benchmark functions and IEEE CEC 2011 real-world problems, are reported and discussed in Section 5.
Finally, Section 6 concludes the paper and highlights possible future work.
vi,j (t + 1) = ωvi,j (t) + c1 r1 (pi,j − xi,j (t)) + c2 r2 (gj − xi,j (t)) (2)
The algorithm terminates when a stopping criterion is met, such as reaching a maximum number of iterations
or achieving a satisfactory solution. The final solution is the position of the particle with the best-known fitness
value.
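As a reference point for the variants discussed below, the following sketch implements the canonical global-best PSO loop built around the velocity update of Eq. 2 and the usual position update x_i(t+1) = x_i(t) + v_i(t+1). The swarm size, inertia weight and acceleration coefficients are illustrative assumptions, and the sphere function is only a stand-in objective.

```python
import numpy as np

N, D, T = 30, 10, 1000                    # swarm size, dimension, iterations (assumed values)
omega, c1, c2 = 0.729, 1.49445, 1.49445   # commonly used settings (assumption, not PSO-sono's)
lower, upper = -100.0, 100.0

def f(x):                                  # stand-in objective (sphere)
    return float(np.sum(x ** 2))

x = np.random.uniform(lower, upper, (N, D))           # positions
v = np.zeros((N, D))                                   # velocities
pbest = x.copy()                                       # personal best positions
pbest_f = np.array([f(xi) for xi in x])                # personal best fitness values
g = pbest[np.argmin(pbest_f)].copy()                   # global best position

for t in range(T):
    r1, r2 = np.random.rand(N, D), np.random.rand(N, D)
    v = omega * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update (Eq. 2)
    x = np.clip(x + v, lower, upper)                            # position update + bound repair
    fx = np.array([f(xi) for xi in x])
    improved = fx < pbest_f                                     # update personal bests
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]
    g = pbest[np.argmin(pbest_f)].copy()                        # update global best

print(f(g))   # best fitness found
```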
PSO has several advantages over other optimization algorithms, such as its simplicity, fast convergence and
ability to handle multimodal and non-linear functions. However, it also has some limitations, such as premature
convergence and sensitivity to parameter settings [Eng07].
Several PSO variants have been proposed over the years. One popular variant is the Comprehensive Learning PSO (CLPSO) proposed by [LQSB06]. In CLPSO, an exemplar particle is chosen from the swarm, and each dimension of each particle learns either from the corresponding dimension of the exemplar or from the particle's own personal best solution. Furthermore, in CLPSO, ω follows a linear reduction from 0.9 to 0.4. An extension of CLPSO was proposed by [NDM+ 12], which uses a ring topology that changes at certain intervals during the run. Furthermore, a regrouping operation of the particles is conducted every few iterations. Rather than
using the global and personal best experience of the swarm, Social Learning PSO (SLPSO) [CJ15] uses a novel
social learning approach where each particle learns from any particle with a better objective function value.
Another PSO variant, Ensemble PSO (EPSO), proposed by [LS17] uses five basic update equations from differ-
ent PSO variants to tackle optimization problems. The EPSO uses a self-adaptive selection process to choose
the most appropriate update equation used by the particles in the larger sub-population in each iteration. The
MPSO algorithm [LZT20] uses a chaos-based non-linear inertia weight to "balance" exploration and exploita-
tion. Furthermore, MPSO employs an adaptive strategy-based update equation to enhance the performance of
PSO when applied to complex optimization problems.
A recent survey of PSO and its variants can be found in [SESA+ 22]. Moreover, [PLK+ 22] investigated the
impact of swarm topology on the performance of PSO and its variants.
vi (t + 1) = ω(t) ⋅ vi (t) + c1 (t) ⋅ r1 ⋅ (pi (t) − xi (t)) + c2 (t) ⋅ r2 ⋅ (g(t) − xi (t)), (7)
and
xi (t + 1) = xi (t) + vi (t + 1). (8)
where r1, r2 ∼ U(0, 1) and ω(t), c1(t) and c2(t) are updated as follows:

ω(t) = ωmax − (t/T) ⋅ (ωmax − ωmin),    (9)

c1(t) = 2.5 − 2 ⋅ (t/T),    (10)

c2(t) = 0.5 + 2 ⋅ (t/T),    (11)

where T is the maximum number of iterations. The other group, i.e. the worse-particle group, uses a different update paradigm:
and
where ⊗ is the element-wise product (i.e. the Hadamard product), p_i = (∑_{k=1}^{n} (ϕ_k ⊗ x_{nb,k}(t))/n) ⊘ ϕ, x_{nb,k}(t) is a neighbor particle of x_i(t) in the ring topology, ⊘ is the element-wise division and ϕ is the acceleration weight, i.e. ϕ = ∑_{k=1}^{n} ϕ_k, where ϕ_k ∼ U([0, 4.1/n]) and n is the size of the neighborhood, typically set to 2.
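The fully-informed attractor p_i described above can be sketched as follows. The neighbor indexing, the neighborhood size n = 2 and all variable names are assumptions, and the exact update equations of the worse-particle group (Eqs. 12 and 13) are not reproduced here.

```python
import numpy as np

def fully_informed_attractor(i, positions, n=2, phi_max=4.1):
    """Sketch of the fully-informed attractor p_i built from the n ring neighbors
    of particle i, following the description in the text."""
    N, D = positions.shape
    # Ring neighbors of particle i (assumed: its immediate left and right neighbors).
    neighbors = [(i - 1) % N, (i + 1) % N][:n]
    phi_k = np.random.uniform(0.0, phi_max / n, size=(n, D))   # phi_k ~ U([0, 4.1/n])
    phi = phi_k.sum(axis=0)                                    # phi = sum_k phi_k
    weighted = sum(phi_k[k] * positions[nb] for k, nb in enumerate(neighbors))
    return (weighted / n) / phi                                # element-wise division by phi

positions = np.random.uniform(-100, 100, (20, 10))             # placeholder swarm
p_i = fully_informed_attractor(3, positions)
```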
The pseudocode of the PSO-sono algorithm is listed in Alg. 1.
PSO-sono has been compared with the PSO variants discussed in Sec. 2 on a large test set containing all the functions from CEC 2013, CEC 2014 and CEC 2017. The results show that PSO-sono generally outperforms the other PSO variants on most of the tested problems.
Algorithm 1: Pseudocode for the PSO-sono Algorithm
input : Solution space [l, u], velocity range [Vmin, Vmax] and maximum number of function evaluations nfemax.
output: Number of function evaluations nfe, best solution g and best fitness value f(g).
1  t ← 1;
2  for i ← 1 to N do
3      Initialize the velocity of particle i, vi(t) ∼ U([Vmin, Vmax])^D;
4      Initialize the i-th particle, xi(t) ∼ U([l, u])^D;
5      Calculate the fitness value f(xi(t));
6  nfe ← N;
7  Label g(t) and f(g(t));
8  while nfe < nfemax do
9      Sort the swarm in descending order;
10     Calculate the parameters according to Eqs. 9, 10 and 11;
11     Separate the swarm into two groups according to r;
12     Update particles in the better-particle group according to Eqs. 7 and 8;
13     Update particles in the worse-particle group according to Eqs. 12 and 13;
14     for i ← 1 to N do
15         Calculate the fitness value f(xi(t));
16     nfe ← nfe + N;
17     Update the ratio r according to Eq. 6;
18     Apply the fully-informed search according to Eq. 16;
19     nfe ← nfe + 1;
20     Update g(t) and the best fitness value f(g(t)).
2. Instead of being initialized randomly within the range [Vmin, Vmax] as in PSO-sono, the initial velocity of each particle is set to zero. In the real world, the velocity of physical objects in their initial positions is zero, and particles initialized with non-zero velocities violate this analogy [Eng07]. Moreover, random initial velocities are not needed, given that the particles' positions are randomly initialized, which already ensures random positions and moving directions [Eng07].

3. Replacing t/T with nfe/nfemax. Most meta-heuristic algorithms iterate until a maximum number of function evaluations, i.e. nfemax, is reached. Finding the maximum number of iterations, T, is not always straightforward, since some meta-heuristics perform a variable number of function evaluations in each iteration. Thus, replacing T with nfemax is more practical. Both changes are illustrated in the sketch after this list.
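A minimal sketch of the two minor changes, assuming the schedules of Eqs. 9-11: velocities start at zero, and the schedules are driven by nfe/nfemax instead of t/T (all variable names are illustrative).

```python
import numpy as np

N, D, nfe_max = 30, 10, 100_000        # assumed swarm size, dimension and budget
omega_max, omega_min = 0.9, 0.4

v = np.zeros((N, D))                   # minor change 2: zero initial velocities (no Vmin/Vmax needed)
nfe = N                                # evaluations already spent on the initial swarm

def schedules(nfe, nfe_max):
    """Minor change 3: drive Eqs. 9-11 by nfe/nfe_max rather than t/T."""
    ratio = nfe / nfe_max
    omega = omega_max - ratio * (omega_max - omega_min)   # Eq. 9
    c1 = 2.5 - 2.0 * ratio                                # Eq. 10
    c2 = 0.5 + 2.0 * ratio                                # Eq. 11
    return omega, c1, c2

omega, c1, c2 = schedules(nfe, nfe_max)
```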
The effect of the above minor changes on the performance of PSO-sono is depicted in Fig. 1. The figure shows that initializing the velocities of the particles to zero allows us to remove two parameters, i.e. Vmin and Vmax, without degrading the performance of PSO-sono, as shown in the left column of Fig. 1. The right column shows that using nfe/nfemax is at least as good as using t/T.
Figure 1: The effect of the minor changes on the performance of PSO-sono, "wins/draws/loses" means the
proposed change wins, draws and loses, respectively, against the original PSO-sono.
4.2 Major Changes
After introducing the minor changes and showing their usefulness, we integrate them into the PSO-sono algorithm. In this section, we introduce some major changes to the algorithm and show their usefulness.
vi (t + 1) = ω(t) ⋅ vi (t) + c1 (t) ⋅ r1 ⋅ (pi (t) − xi (t)) + c2 (t) ⋅ r2 ⋅ (gi (t) − xi (t)), (19)
where gi is the best particle in the neighborhood Ni , which is defined as
Figure 3: A ring topology with 6 particles.
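A possible implementation of the local best g_i used in Eq. 19 is sketched below, assuming that each particle's neighborhood N_i consists of itself and its two immediate ring neighbors (this neighborhood definition and the variable names are assumptions consistent with Figure 3, not a transcription of the paper's own neighborhood definition).

```python
import numpy as np

def local_best(i, pbest, pbest_f):
    """Return g_i, the best personal-best position in the ring neighborhood of particle i."""
    N = len(pbest_f)
    neighborhood = [(i - 1) % N, i, (i + 1) % N]          # N_i: left neighbor, itself, right neighbor
    best = min(neighborhood, key=lambda k: pbest_f[k])    # minimization: lowest fitness wins
    return pbest[best]

pbest = np.random.uniform(-100, 100, (6, 10))             # 6 particles, as in Figure 3
pbest_f = np.array([float(np.sum(p ** 2)) for p in pbest])
g_3 = local_best(3, pbest, pbest_f)
```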
r = 1 − 0.9 ⋅ (0.1/0.9)^(nfe/nfemax).    (22)
Thus, the focus will first be on exploration, i.e. 90% of the particles use Eq. 12, then on exploitation, i.e.
90% of the swarm use Eq. 7.
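A sketch of how the non-linear ratio of Eq. 22 can be computed and used to split the sorted swarm follows; the convention that the best ⌈rN⌉ particles form the better-particle group is our reading of the description above and should be treated as an assumption.

```python
import numpy as np

def ratio(nfe, nfe_max):
    """Non-linear reduction of the split ratio r (Eq. 22): r grows from 0.1 to 0.9
    as nfe approaches nfe_max, shifting the emphasis from exploration to exploitation."""
    return 1.0 - 0.9 * (0.1 / 0.9) ** (nfe / nfe_max)

def split_swarm(fitness, r):
    """Assumed interpretation: the best ceil(r*N) particles form the better-particle
    group (Eqs. 7-8); the remaining particles form the worse-particle group (Eqs. 12-13)."""
    order = np.argsort(fitness)                      # best (lowest fitness) first
    n_better = max(1, int(np.ceil(r * len(fitness))))
    return order[:n_better], order[n_better:]

fitness = np.random.rand(30)                         # placeholder fitness values
better, worse = split_swarm(fitness, ratio(nfe=0, nfe_max=100_000))   # ~10% better, ~90% worse
```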
Figure 4 illustrates the difference between the two approaches.
Figure 4: Linear (Eq. 21) and non-linear (Eq. 22) reduction of the ratio r as a function of nfe.
Fig. 5 compares the performance of PSO-sono using the original approach (Eq. 6), the linear approach (Eq. 21) and the non-linear approach (Eq. 22). The results (second, third and fourth columns from the left) show that the two proposed approaches are generally better than the original one. The linear and non-linear approaches are comparable. However, the non-linear approach is adopted in this study since it did not degrade the performance of PSO-sono on any problem.
x^O = 2 ⋅ x̄_center − x.    (23)

From the above equation, it is clear that COBL uses the knowledge of the whole swarm, as in the more complex FIS scheme, but in a simpler way. Using COBL, Eqs. 17 and 18 are replaced with the following equation:
Hence, we are comparing the best solution found so far with its opposite as defined in Eq. 23. Fig. 5 (the
rightmost column) summarizes the results of using the simplified FIS rather than the original FIS in PSO-sono.
The results show that the simpler scheme is at least as good as the original FIS scheme.
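A sketch of the simplified, COBL-based scheme as we read it: the opposite of the best solution found so far is generated through the swarm centroid (Eq. 23), evaluated once, and kept if it improves on the best solution. The bound clipping and all names are assumptions.

```python
import numpy as np

def cobl_step(positions, g, f_g, f, lower, upper):
    """Centroid opposition-based step applied to the best solution g (a sketch of Eq. 23)."""
    centroid = positions.mean(axis=0)                  # x_bar_center: centroid of the swarm
    g_opp = np.clip(2.0 * centroid - g, lower, upper)  # opposite of g w.r.t. the centroid
    f_opp = f(g_opp)                                   # one extra function evaluation
    if f_opp < f_g:                                    # keep the opposite only if it is better
        return g_opp, f_opp
    return g, f_g

f = lambda x: float(np.sum(x ** 2))                    # stand-in objective
positions = np.random.uniform(-100, 100, (30, 10))     # placeholder swarm
g = positions[np.argmin([f(p) for p in positions])]
g, f_g = cobl_step(positions, g, f(g), f, -100, 100)
```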
To summarize, Fig. 5 shows that using the ring topology is the most important change to the original approach, followed by changing the update equation of the ratio r, and finally using COBL.
Figure 5: The effect of the major changes on the performance of PSO-sono, "wins/draws/loses" means the
proposed change wins, draws and loses, respectively, against the original PSO-sono.
5 Experimental Results
To investigate the performance of the proposed approach, the 12 benchmark functions of the CEC 2022 com-
petition on single objective bound-constrained numerical optimization [KPM+ 21] have first been used. A set of
21 real-world optimization problems has then been used for comparison purposes.
The CEC 2022 test set consists of 12 functions with different characteristics:
• One unimodal function,
• four shifted and rotated functions,
• three hybrid functions that are linear combinations of some functions, and
• four composition functions.

The results show that IPSO-sono outperforms PSO-sono on 9 functions, while performing comparably on 2 functions. The only function where PSO-sono performed
better was f8 .
To study the convergence behavior of the two algorithms, six representative functions are depicted in Fig. 7. The figure shows the average minimum objective function value obtained by each algorithm for each value of nfe. It shows that IPSO-sono generally reaches better solutions than PSO-sono.
Another important factor to consider is the diversity of the swarm defined (as in [PTB17]) by:
DI = √((1/P) ∑_{i=1}^{P} ∑_{j=1}^{D} (x_{i,j} − x̄_j)²),    (25)

where x̄_j is the j-th component of the mean vector of the solutions in the swarm. Fig. 8 depicts the average diversity of the swarm for the two algorithms on the same six functions shown in Fig. 7.
The figure shows that IPSO-sono maintains the diversity of its swarm for a longer period than PSO-sono, thus reducing the possibility of premature convergence.
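The diversity index of Eq. 25 translates directly into code, as in the following sketch (variable names are illustrative).

```python
import numpy as np

def diversity(positions):
    """Swarm diversity DI (Eq. 25): root of the mean, over the P particles, of the squared
    distance of each particle to the swarm mean vector."""
    mean = positions.mean(axis=0)                       # x_bar_j for each dimension j
    return float(np.sqrt(np.sum((positions - mean) ** 2) / positions.shape[0]))

positions = np.random.uniform(-100, 100, (30, 20))      # P = 30 particles, D = 20
print(diversity(positions))
```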
Table 3: Comparison between the median and best objective function values obtained by IPSO-sono and PSO-
sono on the IEEE CEC 2022 functions.
Table 4: Statistical results on the IEEE CEC 2022 problems using Wilcoxon's Test (α = 0.05), where + indicates that IPSO-sono wins, = indicates that both algorithms are the same and − means that PSO-sono wins. The last column shows the prediction, based on Laplace's rule of succession [OC23], that IPSO-sono will win in the next (future) run.
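For reproducibility, pairwise comparisons of the kind reported in Table 4 can be obtained roughly as follows, using scipy's Wilcoxon signed-rank test. The Laplace-rule estimate uses the rule of succession, (wins + 1)/(runs + 2), which is our reading of [OC23]; the result arrays are placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder final objective values over 30 independent runs of each algorithm.
ipso_sono = np.random.rand(30)
pso_sono = np.random.rand(30) + 0.1

res = wilcoxon(ipso_sono, pso_sono)                # paired, two-sided Wilcoxon signed-rank test
alpha = 0.05
if res.pvalue >= alpha:
    outcome = "="                                  # no statistically significant difference
else:
    # Minimization: the algorithm with the lower median objective value wins.
    outcome = "+" if np.median(ipso_sono) < np.median(pso_sono) else "-"

wins = int(np.sum(ipso_sono < pso_sono))           # run-by-run wins of IPSO-sono
laplace = (wins + 1) / (len(ipso_sono) + 2)        # rule-of-succession estimate of a future win
print(outcome, res.pvalue, laplace)
```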
The difference between T_elapsed and T_evals, representing the average algorithmic overhead, for D equal to 10 and 20 is shown in Fig. 9. Compared to PSO-sono, IPSO-sono exhibits significantly lower overhead (2.66 ms vs. 4.16 ms for D = 10, and 3.69 ms vs. 5.78 ms for D = 20), resulting in a speedup factor of more than 1.56.
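A sketch of how the algorithmic overhead could be estimated, under our interpretation that T_elapsed is the wall-clock time of a full run and T_evals the time spent purely on function evaluations; run_algorithm and f are placeholders (a random-search stand-in, not PSO-sono).

```python
import time
import numpy as np

def f(x):                                   # placeholder objective
    return float(np.sum(x ** 2))

def run_algorithm(f, nfe_max, D=10):        # placeholder optimizer: random search
    eval_times = []
    best = np.inf
    for _ in range(nfe_max):
        x = np.random.uniform(-100, 100, D)
        t0 = time.perf_counter()
        fx = f(x)                           # time spent purely on the evaluation
        eval_times.append(time.perf_counter() - t0)
        best = min(best, fx)
    return best, sum(eval_times)

t0 = time.perf_counter()
best, t_evals = run_algorithm(f, nfe_max=10_000)
t_elapsed = time.perf_counter() - t0
overhead = t_elapsed - t_evals              # estimated algorithmic overhead of the run
print(best, overhead)
```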
• The Genetic Algorithm with Multi-Parent Crossover (GA-MPC) [ESE11], which is the winner of the IEEE CEC 2011 competition.
• The Improved Rao (I-Rao) algorithm [RP22], a recent variant of the Rao [Rao20] algorithm. I-Rao has been compared with 13 approaches on 45 CEC problems and 19 real-world problems; according to its authors, it generally outperformed the 13 approaches.
• L-SHADE [TF14], is a very popular and effective DE variant, which ranked first in the IEEE CEC 2014
competition.
• Jaya2 [OI22], which is a very recent, simple and effective variant of the Jaya algorithm [Rao16].
• The Spherical Search (SS) algorithm [KMS+ 19], a recent optimization algorithm that creates a spherical boundary and then constructs candidate solutions on the surface of that boundary.
Table 5 reports the parameter values used for the above approaches. We followed the recommendations of
the original works when setting these values.
Algorithm   Parameters
GA-MPC      Population size = 90 and p = 0.1.
I-Rao       NP = 50.
Jaya2       Pmax = 100 and Pmin = 3.
L-SHADE     NP = 18 × D, p = 0.11, and the size of the historical (circular) memory is 5.
SS          Ninit = 100, p = 0.1, rank = 0.5 × D, and c = 0.5.
Table 6 summarizes the median objective function values obtained by the six competing algorithms. Table 7 reports the results of the Wilcoxon test. The results show that IPSO-sono outperformed GA-MPC on 6 functions, while being outperformed on 3 problems. Compared with I-Rao, IPSO-sono performed better on 7
Figure 7: Best function value curves (averaged across 30 runs) of IPSO-sono and PSO-sono for selected CEC 2022 benchmark functions (f1, f4, f5, f6, f10 and f12; D = 20).
functions while performing worse on 3 functions. IPSO-sono performed better than Jaya2 on 5 functions, while
Jaya2 performed better on 4 functions. Compared to SS, the proposed approach performed better on only two
functions. Table 7 shows that L-SHADE is a clear winner on this set of benchmark functions.
Figure 8: Diversity curves (averaged across 30 runs) of IPSO-sono and PSO-sono for selected CEC 2022 benchmark functions (f1, f4, f5, f6, f10 and f12; D = 20).
Table 6: Comparison between the median objective function values obtained by IPSO-sono and other meta-
heuristics on the IEEE CEC 2022 benchmark functions.
Figure 9: Average algorithmic overhead (in seconds) for PSO-sono and IPSO-sono, over increasing dimension-
ality values.
Table 7: Statistical results on the IEEE CEC 2022 functions using Wilcoxon's Test (α = 0.05), where + indicates that IPSO-sono wins, = indicates that both algorithms are the same and − means that the other algorithm wins.
First, PSO-sono and IPSO-sono are compared on the IEEE CEC 2011 problems. Table 9 reports the median and best objective function values obtained by the two approaches. The results of the Wilcoxon test are summarized in Table 10, which shows that IPSO-sono generally outperforms PSO-sono, especially when the problem size, i.e. D, is large. This may suggest that IPSO-sono is more suitable than PSO-sono for problems with higher dimensions.
Second, IPSO-sono is compared with the five state-of-the-art approaches described in Section 5.3. Table 11 reports the median objective function values obtained by the competing approaches. Table 12 summarizes the statistical test results. The results show that IPSO-sono outperforms both I-Rao and SS (actually, SS performs very poorly on this set of problems). Jaya2 slightly outperforms IPSO-sono (7 wins vs. 5). GA-MPC (the winner of the IEEE CEC 2011 competition) performs better than the proposed approach, although IPSO-sono still manages to outperform GA-MPC on 3 problems. L-SHADE confirms its superiority on this set of problems too.
Finally, one interesting case is the performance of SS. It performs relatively well on the IEEE CEC 2022 functions but very poorly on the real-world problems. This may suggest that the IEEE CEC 2022 test set alone is not sufficient to assess the performance of an optimizer. Moreover, as mentioned from the outset, we believe that any optimization algorithm should be tested on many real-world problems to validate its performance. A set of diverse problems such as the IEEE CEC 2011 is extremely useful for comparing different optimization algorithms.
Table 8: Summary of the CEC 2011 real-world problems.
Table 9: Comparison between the median and best objective function values obtained by IPSO-sono and PSO-
sono on IEEE CEC 2011.
• Ring topology: The original PSO-sono uses the fully-connected topology for its better-particle group, which often results in premature convergence. A ring topology is used instead to slow down the information exchange between the particles, reducing the likelihood of premature convergence.
• Non-linear ratio, r, reduction: a simpler formula is used to update the parameter r. The formula focuses
on exploration at the beginning of a run while gradually switching to exploitation at the end of the run.
• COBL: a simpler FIS scheme, based on COBL, is used. The simplified scheme still uses the knowledge of the whole swarm, as in the original FIS scheme, but is much easier to understand and implement.
Some other minor changes have also been proposed. The proposed approach, called IPSO-sono, was tested
on 12 IEEE CEC 2022 functions and 21 IEEE CEC 2011 real-world problems. The results show that IPSO-sono
Table 10: Statistical results of the CEC 2011 problems using Wilcoxon’s Test (α = 0.05), where + indicates that
IPSO-sono wins, = indicates that both algorithms are the same and − means that PSO-sono wins.
Function   p-value
T1         1.284505e-01 (=)
T2         1.229032e-05 (−)
T4         5.478329e-01 (=)
T5         2.831363e-02 (−)
T6         2.758321e-01 (=)
T7         5.132932e-05 (−)
T8         1.000000e+00 (=)
T9         2.001302e-05 (+)
T10        8.085130e-05 (+)
T11.1      1.229032e-05 (+)
T11.2      1.229032e-05 (+)
T11.3      8.611622e-01 (=)
T11.4      9.042082e-05 (−)
T11.5      5.097549e-01 (=)
T11.6      8.705089e-03 (+)
T11.7      3.260495e-01 (=)
T11.8      6.450980e-05 (+)
T11.9      7.224473e-05 (+)
T11.10     1.259557e-04 (+)
T12        7.366172e-01 (=)
T13        5.271827e-01 (=)
+/=/−      8/9/4
Table 11: Comparison between the median objective function values obtained by IPSO-sono and other meta-
heuristics on the IEEE CEC 2011 benchmark problems.
clearly outperforms PSO-sono on most problems. Moreover, it is more efficient than the original algorithm, i.e. it requires less computation time.
However, when compared with other meta-heuristics, the results are mixed. For example, IPSO-sono generally outperforms I-Rao and SS, while performing comparably to Jaya2. Its performance relative to GA-MPC depends on the problem type: on the IEEE CEC 2022 functions, IPSO-sono is generally better, whereas on the IEEE CEC 2011 problems, GA-MPC (the winner of that competition) is much better. L-SHADE, a DE variant, is clearly the winner on both sets of problems. This confirms the findings of [PNP23], who found that DE variants generally outperform PSO variants on a wide range of problems.
Future work will explore using IPSO-sono to solve more practical problems (e.g., we are currently investi-
gating its use for color image quantization). A discrete version of IPSO-sono may also be investigated.
Table 12: Statistical results on the CEC 2011 problems using Wilcoxon's Test (α = 0.05), where + indicates that IPSO-sono wins, = indicates that both algorithms are the same and − means that the other algorithm wins.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding
author on reasonable request.
Declarations
Competing Interests The authors declare that they have no conflict of interest.
Authors contribution statement M.O. and H.W. developed the idea. M.O. implemented the proposed ap-
proach. M.O. and M.A. conducted the experiments. M.O. prepared the manuscript. H.W. reviewed the
manuscript and provided feedback.
Ethical and informed consent for data used This article does not contain any studies with human participants
or animals performed by any of the authors.
References
[CJ15] R. Cheng and Y. Jin. A social learning particle swarm optimization algorithm for scalable opti-
mization. Information Sciences, 291:43–60, 2015.
[Cle15] M Clerc. Guided Randomness in Optimization. Wiley, 2015.
[DS10] Swagatam Das and Ponnuthurai N Suganthan. Problem definitions and evaluation criteria for
cec 2011 competition on testing evolutionary algorithms on real world optimization problems.
Jadavpur University, Nanyang Technological University, Kolkata, 2010.
[DSKC19] T. Dokeroglu, E. Sevinc, T. Kucukyilmaz, and A. Cosar. A survey on new generation meta-heuristic algorithms. Computers & Industrial Engineering, 137, 2019.
[GL98] Fred Glover and Manuel Laguna. Tabu Search, pages 2093–2229. Springer US, Boston, MA,
1998.
[Gol89] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley, New York, 1989.
[HM99] Pierre Hansen and Nenad Mladenović. An Introduction to Variable Neighborhood Search, pages
433–458. Springer US, Boston, MA, 1999.
[HMSCS19] Kashif Hussain, Mohd Najib Mohd Salleh, Shi Cheng, and Yuhui Shi. Metaheuristic research: a
comprehensive survey. Artificial Intelligence Review, 52(4):2191–2233, 2019.
[KE95] J Kennedy and R Eberhart. Particle swarm optimization. In International Joint Conference on
Neural Networks (IJCNN), pages 1942–1948. IEEE, 1995.
[KM02] J Kennedy and R Mendes. Population structure and particle swarm performance. In Congress
on Evolutionary Computation (CEC), volume 2, page 1671–1676. IEEE, 2002.
[KMS+ 19] A Kumar, R Misra, D Singh, S Mishra, and S Das. The spherical search algorithm for bound-
constrained global optimization problems. Appl. Soft Comput., 85, 2019.
[KPM+ 21] A Kumar, K Price, A Mohamed, A Hadi, and P Suganthan. Problem definitions and evaluation
criteria for CEC 2022 competition on single objective bound constrained numerical optimization.
Technical report, 2021.
[LAS18] Nandar Lynn, Mostafa Z Ali, and Ponnuthurai Nagaratnam Suganthan. Population topologies
for particle swarm optimization and differential evolution. Swarm and evolutionary computation,
39:24–35, 2018.
[LQSB06] J.J. Liang, A.K. Qin, P.N. Suganthan, and S. Baskar. Comprehensive learning particle swarm
optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary
Computation, 10(3):281–295, 2006.
[LS17] N. Lynn and P. Suganthan. Ensemble particle swarm optimizer. Applied Soft Computing, 55:533–
548, 2017.
[LZT20] H. Liu, X. Zhang, and L. Tu. A modified particle swarm optimization using adaptive strategy.
Expert Systems with Applications, 152:533–548, 2020.
[MD99] M. Dorigo and G. Di Caro. The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo, and F. Glover, editors, New Ideas in Optimization, pages 11–32. McGraw-Hill, London, 1999.
[MZML22] Z Meng, Y Zhong, G Mao, and Y Liang. PSO-sono: A novel PSO variant for single-objective
numerical optimization. Information Sciences, 586:176–191, 2022.
[NDM+ 12] Md. Nasir, S. Das, D. Maity, S. Sengupta, U. Halder, and P. Suganthan. A dynamic neighborhood learning based particle swarm optimizer for global numerical optimization. Information Sciences, 209:16–36, 2012.
[OC23] M Omran and M Clerc. Laplace’s rule of succession: a simple and efficient way to compare
metaheuristics. Neural Computing and Applications, 2023.
[OI22] M. Omran and G. Iacca. An improved jaya optimization algorithm with ring topology and
population size reduction. Journal of Intelligent Systems, 31(1):1178–1210, 2022.
[OVRDS+ 21] E. Osaba, E. Villar-Rodriguez, J. Del Ser, A. Nebro, D. Molina, A. LaTorre, P. Suganthan,
C. Coello Coello, and F. Herrera. A tutorial on the design, experimentation and application of
metaheuristic algorithms to real-world optimization problems. Swarm and Evolutionary Com-
putation, 64, 2021.
[PLK+ 22] J. Peng, Y. Li, H. Kang, Y. Shen, X. Sun, and Q. Chen. Impact of population topology on
particle swarm optimization and its variants: An information propagation perspective. Swarm
and Evolutionary Computation, 69, 2022.
[PNP23] A. Piotrowski, J. Napiorkowski, and A. Piotrowska. Particle swarm optimization or differential
evolution—a comparison. Engineering Applications of Artificial Intelligence, 121, 2023.
[PTB17] R. Poláková, J. Tvrdik, and P. Bujok. Adaptation of population size according to current population diversity in differential evolution. In Proceedings of the 2017 IEEE Symposium Series on Computational Intelligence (SSCI), pages 2627–2634. IEEE, 2017.
[Rao16] R. V. Rao. Jaya: A simple and new optimization algorithm for solving constrained and uncon-
strained optimization problems. Int. J. Ind. Eng. Comput., 7(1):19–34, 2016.
[Rao20] R. Rao. Rao algorithms: Three metaphor-less simple algorithms for solving optimization prob-
lems. International Journal of Industrial Engineering Computations, 11(1):107–130, 2020.
[RJF+ 14] S. Rahnamayan, J. Jesuthasan, F. Bourennani, H. Salehinejad, and G. Naterer. Computing opposition by involving entire population. In Congress on Evolutionary Computation (CEC), pages 1800–1807. IEEE, 2014.
[RP22] R. V. Rao and R. B. Pawar. Improved rao algorithm: a simple and effective algorithm for con-
strained mechanical design optimization problems. Soft Computing, 27(7):3847–3868, 2022.
[SBN+ 23] Tapas Si, Debolina Bhattacharya, Somen Nayak, Péricles B.C. Miranda, Utpal Nandi, Saurav Mallik, Ujjwal Maulik, and Hong Qin. PCOBL: A novel opposition-based learning strategy to improve metaheuristics exploration and exploitation for solving global optimization problems. IEEE Access, pages 1–1, 2023.
[SD08] Krzysztof Socha and Marco Dorigo. Ant colony optimization for continuous domains. European
journal of operational research, 185(3):1155–1173, 2008.
[SESA+ 22] Tareq M. Shami, Ayman A. El-Saleh, Mohammed Alswaitti, Qasem Al-Tashi, Mhd Amen Sum-
makieh, and Seyedali Mirjalili. Particle swarm optimization: A comprehensive survey. IEEE
Access, 10:10031–10061, 2022.
[SK21] B. Suman and P. Kumar. A survey of simulated annealing as a tool for single and multiobjective
optimization. Journal of the Operational Research Society, 57(10):1143–1160, 2021.
[SP95] Rainer Storn and Kenneth Price. Differential evolution-a simple and efficient adaptive scheme
for global optimization over continuous spaces. Technical report, Berkeley: ICSI, 1995.
[TF14] Ryoji Tanabe and Alex Fukunaga. Improving the search performance of SHADE using linear
population size reduction. In Congress on Evolutionary Computation (CEC), pages 1658–1665.
IEEE, 2014.
[Wil45] Frank Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83,
1945.