
Systems and Soft Computing

Empirical Analysis and Improvement of the PSO-Sono Optimization Algorithm


--Manuscript Draft--

Manuscript Number: SASC-D-24-00394

Full Title: Empirical Analysis and Improvement of the PSO-Sono Optimization Algorithm

Short Title: Analysis of PSO-Sono Optimization Algorithm

Article Type: Full Length Article

Keywords: Meta-heuristics; Optimization; Particle Swarm Optimization; PSO-sono; sustainability

Corresponding Author: Mahamed Omran
Abdullah Al Salem University, Kuwait, KUWAIT

Corresponding Author Secondary Information:

Corresponding Author's Institution: Abdullah Al Salem University

Corresponding Author's Secondary Institution:

First Author: Mahamed Omran

First Author Secondary Information:

Order of Authors: Mahamed Omran

Hui Wang

Order of Authors Secondary Information:

Abstract: PSO-sono is a recent and promising variant of the Particle Swarm Optimization (PSO) algorithm. It outperforms other popular PSO variants on many benchmark test sets. In this paper, we investigate the performance of PSO-sono on more problems (including 21 real-world optimization problems). Moreover, we propose a new, more powerful yet simpler and more efficient variant of PSO-sono, called IPSO-sono. The proposed approach uses ring topology, non-linear ratio reduction and opposition-based learning to improve the performance of PSO-sono. The proposed approach is compared with other state-of-the-art meta-heuristic algorithms on the 12 IEEE CEC 2022 benchmark functions and the 21 real-world problems defined in the IEEE CEC 2011 suite. The results show that IPSO-sono outperforms PSO-sono on most problems and performs well compared to other state-of-the-art approaches.

Suggested Reviewers:

Opposed Reviewers:

Additional Information:

Question Response

Free Preprint Service: Do you want to share your research early as a preprint? Preprints allow for open access to and citations of your research prior to publication. Systems and Soft Computing offers a free service to post your paper on SSRN, an open access research repository, at the time of submission. Once on SSRN, your paper will benefit from early registration with a DOI and early dissemination that facilitates collaboration and early citations. It will be available free to read regardless of the publication decision made by the journal. This will have no effect on the editorial process or outcome with the journal. Please consult the SSRN Terms of Use and FAQs.

Response: YES, I want to share my research early and openly as a preprint.

To complete your submission you must select a statement which best reflects the availability of your research data/code. IMPORTANT: this statement will be published alongside your article. If you have selected "Other", the explanation text will be published verbatim in your article (online and in the PDF). (If you have not shared data/code and wish to do so, you can still return to Attach Files. Sharing or referencing research data and code helps other researchers to evaluate your findings, and increases trust in your article. Find a list of supported data repositories in Author Resources, including the free-to-use multidisciplinary open Mendeley Data Repository.)

Response: Data will be made available on request.

Powered by Editorial Manager® and ProduXion Manager® from Aries Systems Corporation
Cover Letter

Dear Professor,

I am writing to submit a research paper titled "Empirical Analysis and Improvement of the PSO-Sono Optimization Algorithm" for consideration in your respected journal. This paper proposes a new, more powerful yet simpler and more efficient variant of the recent PSO-sono algorithm, called IPSO-sono.

This paper has not been published previously in another journal, nor is it currently under review elsewhere. All the work contained within the article is our own original creation.

Thank you for taking the time to consider our manuscript for inclusion in your respected journal.

Sincerely yours,

Mahamed G. H. Omran

Professor of Computer Science

College of Computing and Systems

Abdullah Al Salem University

Kuwait

[email protected]
Declaration of Interest Statement

Declaration of interests

☐The authors declare that they have no known competing financial interests or personal relationships
that could have appeared to influence the work reported in this paper.

☒The authors declare the following financial interests/personal relationships which may be considered
as potential competing interests:

I am on the editorial board of this journal. If there are other authors, they declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Manuscript

Empirical Analysis and Improvement of the PSO-Sono Optimization Algorithm

Mahamed G. H. Omran1,* and Hui Wang2

1 College of Computing and Systems, Abdullah Al Salem University, Kuwait, [email protected], ORCID: 0000-0002-4695-6919
2 School of Information Engineering, Nanchang Institute of Technology, Nanchang 330099, China, [email protected]
* Corresponding author

Abstract
PSO-sono is a recent and promising variant of the Particle Swarm Optimization (PSO) algorithm. It outperforms other popular PSO variants on many benchmark test sets. In this paper, we investigate the performance of PSO-sono on more problems (including 21 real-world optimization problems). Moreover, we propose a new, more powerful yet simpler and more efficient variant of PSO-sono, called IPSO-sono. The proposed approach uses ring topology, non-linear ratio reduction and opposition-based learning to improve the performance of PSO-sono. The proposed approach is compared with other state-of-the-art meta-heuristic algorithms on the 12 IEEE CEC 2022 benchmark functions and the 21 real-world problems defined in the IEEE CEC 2011 suite. The results show that IPSO-sono outperforms PSO-sono on most problems and performs well compared to other state-of-the-art approaches.
keywords— Meta-heuristics; Optimization; Particle Swarm Optimization; PSO-sono; sustainability.

1 Introduction
Consider the following optimization problem:

min f (x), x ∈ RD , lj ≤ xj ≤ uj ∀j, (1)

where f (x) is the objective function to be minimized, x is a vector of decision variables in RD , and lj and
uj are the lower and upper bounds, respectively, for the j-th component of x. The goal is to find the value of x
that minimizes the objective function f (x) while satisfying the boundary constraints.
Optimization problems arise in various fields of science, engineering, economics, and many other domains,
where the goal is to find the best solution among a set of feasible solutions. Optimization problems can be
categorized as linear or nonlinear, continuous or discrete, and deterministic or stochastic. In general, finding
the global optimal solution for complex optimization problems is a challenging task, as the number of possible
solutions increases exponentially with the problem size.
Meta-heuristics ([Yan14], [DSKC19] and [HMSCS19]) are a class of optimization algorithms that can han-
dle complex optimization problems with multiple local optima, discontinuous objective functions, and con-
straints. Meta-heuristics are iterative search algorithms that use heuristics or approximate methods to guide the
search towards the optimal solution. Unlike exact methods, such as linear programming or dynamic program-
ming, meta-heuristics do not guarantee finding the global optimal solution, but rather aim to find a good solution
within a reasonable time frame.
Meta-heuristics can be divided into two categories: population-based and single-solution-based methods.
Population-based methods, such as Genetic Algorithms (GA) [Gol89], Particle Swarm Optimization (PSO)
[KE95], Differential Evolution (DE) [SP95] and Ant Colony Optimization (ACO) ([MD99] and [SD08]), main-
tain a set of candidate solutions called the population, and iteratively update the population using various opera-
tors such as selection, crossover, mutation, and local search. Single-solution-based methods, such as Simulated

Annealing [SK21], Tabu Search [GL98] and Variable Neighborhood Search [HM99], start with a single initial
solution and iteratively update it by moving to a neighboring solution that improves the objective function value.
Meta-heuristics have been applied to various optimization problems in diverse fields, such as engineering
design, transportation, finance, scheduling and bioinformatics [OVRDS+ 21]. They have also inspired the de-
velopment of hybrid and adaptive methods that combine the strengths of different algorithms and adapt their
parameters dynamically based on the problem characteristics.
A recent and promising meta-heuristic is PSO-sono [MZML22], which is a variant of the PSO algorithm.
According to the results reported in [MZML22], PSO-sono generally outperforms other popular PSO-variants
on many benchmark test sets (namely, IEEE CEC 2013, 2014 and 2017) consisting of 88 real-parameter single-
objective optimization functions. In this paper, we investigate the performance of PSO-sono on more problems,
i.e. IEEE CEC 2022 and the 21 IEEE CEC 2011 real-world optimization problems. We believe that testing any
optimizer on real-world problems is essential to validate its performance. Moreover, we propose some changes
to the original algorithm to improve its performance. Another objective is to replace complex operations in
PSO-sono with simpler operations without deteriorating the performance of the optimizer.
Hence, the contributions of this work are to
• investigate the performance of PSO-sono on more problems, especially on real-world problems;
• propose major and minor changes to the original algorithm to improve its performance; and
• make PSO-sono easier to implement and simpler to understand by replacing relatively complex operations with simpler ones without degrading the performance of the algorithm.
The rest of the paper is organized as follows. Section 2 describes the PSO algorithm and its variants.
The PSO-sono algorithm is explained in Section 3. In Section 4, the proposed approach is presented. The
results of the proposed algorithm, in comparison with other approaches from the literature, on the IEEE CEC
2022 benchmark functions and IEEE CEC 2011 real-world problems, are reported and discussed in Section 5.
Finally, Section 6 concludes the paper and highlights possible future work.

2 Particle Swarm Optimization


Particle swarm optimization (PSO) is a population-based stochastic optimization algorithm, inspired by the col-
lective behavior of birds flocking or fish schooling. The algorithm simulates the social behavior of individuals
in a group, who collectively search for the best solution to a problem. In PSO, each potential solution is repre-
sented by a particle, which moves through the search space based on its position and velocity. The movement of
each particle is influenced by its own best-known position and the global best-known position of the population
(a.k.a swarm in the PSO literature).
The PSO algorithm begins with an initial swarm of particles randomly distributed in the search space. The
position and velocity of each particle are updated iteratively based on the following equations:

vi,j (t + 1) = ωvi,j (t) + c1 r1 (pi,j − xi,j (t)) + c2 r2 (gj − xi,j (t)) (2)

xi,j (t + 1) = xi,j (t) + vi,j (t + 1) (3)


where vi,j (t) and xi,j (t) are the velocity and position of the ith particle in the j th dimension at time t,
respectively. ω is the inertia weight, which controls the impact of the previous velocity on the new velocity. c1
and c2 are the acceleration coefficients, and r1 and r2 are random numbers uniformly distributed in the range
[0, 1]. pi,j is the best-known personal position of the ith particle in the j th dimension, and gj is the best-known
position of the swarm in the j th dimension.
The values of pi,j and gj are updated at each iteration, as follows:


pi,j (t + 1) = xi,j (t + 1) if f (xi,j (t + 1)) < f (pi,j (t)), and pi,j (t) otherwise, (4)

gj (t + 1) = pi,j (t + 1) if f (pi,j (t + 1)) < f (gj (t)), and gj (t) otherwise, (5)
where f (x) is the objective function to be optimized.
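For concreteness, the update rules of Eqs. 2-5 can be sketched as a minimal gbest PSO in Python/NumPy. The swarm size, coefficient values and bound handling below are illustrative choices of this sketch, not settings prescribed by any particular PSO variant discussed here:

```python
import numpy as np

def pso(f, lb, ub, n_particles=20, n_iter=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal gbest PSO implementing Eqs. 2-5 (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    D = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, D))   # random initial positions
    v = np.zeros((n_particles, D))                   # initial velocities
    p = x.copy()                                     # personal bests, Eq. 4
    fp = np.array([f(xi) for xi in x])
    g, fg = p[np.argmin(fp)].copy(), fp.min()        # global best, Eq. 5
    for _ in range(n_iter):
        r1 = rng.random((n_particles, D))
        r2 = rng.random((n_particles, D))
        v = w * v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # Eq. 2
        x = np.clip(x + v, lb, ub)                          # Eq. 3, clamped to bounds
        fx = np.array([f(xi) for xi in x])
        better = fx < fp                                    # Eq. 4
        p[better], fp[better] = x[better], fx[better]
        if fp.min() < fg:                                   # Eq. 5
            g, fg = p[np.argmin(fp)].copy(), fp.min()
    return g, fg
```

On a low-dimensional sphere function, this sketch drives the objective value close to zero within a few hundred iterations.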

The algorithm terminates when a stopping criterion is met, such as reaching a maximum number of iterations
or achieving a satisfactory solution. The final solution is the position of the particle with the best-known fitness
value.
PSO has several advantages over other optimization algorithms, such as its simplicity, fast convergence and
ability to handle multimodal and non-linear functions. However, it also has some limitations, such as premature
convergence and sensitivity to parameter settings [Eng07].
Several PSO variants have been proposed recently. One popular variant is the Comprehensive Learning PSO (CLPSO) proposed by [LQSB06]. In CLPSO, an exemplar particle is chosen from the swarm, and each dimension of each particle learns either from the corresponding dimension of the exemplar or from the particle's own personal best solution. Furthermore, in CLPSO, ω follows a linear reduction from 0.9 to 0.4. An extension of CLPSO was proposed by [NDM+ 12], which uses a ring topology whose connections are changed at certain intervals during the run. Furthermore, a regrouping operation of the particles is conducted every few iterations. Rather than
using the global and personal best experience of the swarm, Social Learning PSO (SLPSO) [CJ15] uses a novel
social learning approach where each particle learns from any particle with a better objective function value.
Another PSO variant, Ensemble PSO (EPSO), proposed by [LS17] uses five basic update equations from differ-
ent PSO variants to tackle optimization problems. The EPSO uses a self-adaptive selection process to choose
the most appropriate update equation used by the particles in the larger sub-population in each iteration. The
MPSO algorithm [LZT20] uses a chaos-based non-linear inertia weight to "balance" exploration and exploita-
tion. Furthermore, MPSO employs an adaptive strategy-based update equation to enhance the performance of
PSO when applied to complex optimization problems.
A recent survey of PSO and its variants can be found in [SESA+ 22]. Moreover, [PLK+ 22] investigated the
impact of swarm topology on the performance of PSO and its variants.

3 The PSO-sono Algorithm


In this section, the details of the PSO-sono algorithm are presented.

3.1 The Update Paradigm


The swarm is divided into two sub-swarms, i.e. the better-particle group and the worse-particle group, by sorting the particles by their fitness values. The size of each group is dynamically changed during each iteration, t. The ratio, r, denotes the percentage of the better-particle group and is calculated according to

r = nsb / (nsb + nsw ), (6)

where nsb and nsw denote the number of successful particles in the better-particle group and worse-particle group, respectively. The ratio, r, is truncated to be within [0.1, 0.9].
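Eq. 6 with the truncation step can be implemented directly. Note that the fallback value used below when neither group produced a successful particle is an assumption of this sketch, since the equation is undefined in that case:

```python
def success_ratio(nsb, nsw):
    """Eq. 6: fraction of successful particles coming from the better group,
    truncated to [0.1, 0.9]. The 0.5 fallback for the all-zero case is an
    assumption of this sketch, not something the equation specifies."""
    if nsb + nsw == 0:
        return 0.5
    return min(max(nsb / (nsb + nsw), 0.1), 0.9)
```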
The better-particle group uses the following update equation,

vi (t + 1) = ω(t) ⋅ vi (t) + c1 (t) ⋅ r1 ⋅ (pi (t) − xi (t)) + c2 (t) ⋅ r2 ⋅ (g(t) − xi (t)), (7)
and
xi (t + 1) = xi (t) + vi (t + 1). (8)
where r1 , r2 ∼ U (0, 1) and ω(t), c1 (t) and c2 (t) are updated as follows,

ω(t) = ωmax − (t/T ) ⋅ (ωmax − ωmin ), (9)

c1 (t) = 2.5 − 2 ⋅ (t/T ), (10)

and

c2 (t) = 0.5 + 2 ⋅ (t/T ), (11)
where T is the maximum number of iterations. The other worse-particle group uses another update paradigm:

vi,j (t + 1) = r0 ⋅ vi,j (t) + r1 ⋅ Ii,j (t) + r1 ⋅ ϵ ⋅ Ci,j (t), (12)

and

xi (t + 1) = xi (t) + vi (t + 1). (13)


where

Ii (t) = xk (t) − xi (t), (14)


and

Ci (t) = x̄center (t) − xi (t). (15)


where r0 ∼ U (0, 1) is generated anew for each parameter in the D-dimensional velocity, r1 ∼ U (0, 1), xk is a randomly chosen particle with a better objective function value than particle i, ϵ denotes the social influence, ϵ = (D/N ) ⋅ 0.01, and x̄center (t) = (1/N ) ∑i=1..N xi (t) is the center of the population.
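A sketch of the worse-particle-group update (Eqs. 12-15) follows. Whether r1 is drawn per particle or per dimension is not fully specified in the text; this sketch assumes a single scalar per particle, and the fallback donor used when no better particle exists is our own guard:

```python
import numpy as np

def worse_group_step(x, f_vals, i, v_i, eps, rng):
    """One update of particle i in the worse-particle group (Eqs. 12-15).
    x: (N, D) swarm positions; f_vals: fitness values; eps = (D/N)*0.01.
    Assumes r1 is a single scalar per particle (the text leaves this open)."""
    center = x.mean(axis=0)                            # population centre
    better = np.flatnonzero(f_vals < f_vals[i])        # candidates for x_k
    k = int(rng.choice(better)) if better.size else i  # guard: fall back to self
    I = x[k] - x[i]                                    # Eq. 14
    C = center - x[i]                                  # Eq. 15
    r0 = rng.random(x.shape[1])                        # drawn anew per dimension
    r1 = rng.random()
    v_new = r0 * v_i + r1 * I + r1 * eps * C           # Eq. 12
    return v_new, x[i] + v_new                         # Eq. 13
```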

3.2 The Fully-Informed Search (FIS) Scheme


To address the premature convergence problem of PSO, a fully-informed search scheme is used. In this scheme,
the knowledge of the entire swarm is utilized to help the global best particle, i.e. g, to escape from a local
minimum. The scheme is applied once in each iteration as described below.


g(t) = g(t) if f (g(t)) < f (xFIS (t)), and xFIS (t) otherwise, (16)

where xFIS (t) is computed as follows:

xFIS (t) = g(t) ⋅ cos(1 − t/T ) + σ if U (0, 1) < 0.5, and g(t) ⋅ sin(1 − t/T ) + σ otherwise, (17)

where σ is the fully-informed vector calculated as follows:

x̄center (t) = (1/N ) ∑i=1..N xi (t)
vari = (xi (t) − x̄center (t)) + ϕ ⊗ (pi (t) − xi (t)) (18)
σ = (1/N ) ∑i=1..N vari

where ⊗ is the element-wise (i.e. Hadamard) product, pi = (∑k=1..n (ϕk ⊗ xnb,k (t))/n) ⊘ ϕ, xnb,k (t) is a neighbor particle of xi (t) in the ring topology, ⊘ is the element-wise division and ϕ is the acceleration weight, i.e. ϕ = ∑k=1..n ϕk where ϕk ∼ U ([0, 4.1/n]) and n is the size of the neighborhood, typically set to 2.
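The FIS computation of Eqs. 17-18 can be sketched as follows. This reflects our reading of the equations (in particular the weighting inside pi) and may differ in detail from the authors' Matlab reference implementation:

```python
import numpy as np

def fis_candidate(x, pbest_ring, g, t, T, rng):
    """Candidate x_FIS of Eqs. 17-18. pbest_ring has shape (N, n, D) and
    holds each particle's n ring neighbours. The weighting of p_i reflects
    our reading of the equations and may differ from the reference code."""
    N, n, D = pbest_ring.shape
    center = x.mean(axis=0)
    var = np.empty_like(x)
    for i in range(N):
        phi_k = rng.uniform(0, 4.1 / n, size=(n, D))   # phi_k ~ U([0, 4.1/n])
        phi = phi_k.sum(axis=0)                        # phi = sum of phi_k
        p_i = (phi_k * pbest_ring[i]).sum(axis=0) / phi
        var[i] = (x[i] - center) + phi * (p_i - x[i])  # Eq. 18
    sigma = var.mean(axis=0)                           # fully-informed vector
    if rng.random() < 0.5:                             # Eq. 17
        return g * np.cos(1 - t / T) + sigma
    return g * np.sin(1 - t / T) + sigma
```

Eq. 16 then simply keeps whichever of g and xFIS has the better fitness value.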
The pseudocode of the PSO-sono algorithm is listed in Alg. 1.
PSO-sono has been compared with the PSO variants discussed in Sec. 2 on a large test set containing all the functions from CEC 2013, CEC 2014 and CEC 2017. The results show that PSO-sono generally outperforms the other PSO variants on most tested problems.

4 The Proposed Approach


In this section, the details of the modifications of PSO-sono are discussed and motivated. First, we discuss minor changes and fixes that will be incorporated into the proposed major changes. To investigate the effect of
these changes, the set of 12 IEEE CEC 2022 benchmark functions is used. The details of these functions along
with the experimental setup are explained in Section 5.

4.1 Minor Changes


1. In the publicly available Matlab code of PSO-sono1 , there is a bug: after generating xFIS in Eq. 17, boundary constraint violations are not checked. This bug is fixed in this study.
1 https://sites.google.com/view/zhenyumeng/

Algorithm 1: Pseudocode for the PSO-sono Algorithm
input : Solution space [l, u], velocity range [Vmin , Vmax ] and maximum number of function evaluations nf emax .
output: Number of function evaluations nf e, best solution g and best fitness value f (g).
1  t ← 1;
2  for i ← 1 to N do
3      Initialize particle i velocity vi (t) ∼ U ([Vmin , Vmax ])D ;
4      Initialize the i-th particle, xi (t) ∼ U ([l, u])D ;
5      Calculate the fitness value f (xi (t)) ;
6  nf e ← N ;
7  Label g(t) and f (g) ;
8  while nf e < nf emax do
9      Sort the swarm in descending order ;
10     Calculate the parameters according to Eqs. 9, 10 and 11 ;
11     Separate the swarm into two groups according to r ;
12     Update particles in the better-particle group according to Eqs. 7 and 8 ;
13     Update particles in the worse-particle group according to Eqs. 12 and 13 ;
14     for i ← 1 to N do
15         Calculate the fitness value f (xi (t)) ;
16     nf e ← nf e + N ;
17     Update the ratio r according to Eq. 6 ;
18     Apply fully-informed search according to Eq. 16 ;
19     nf e ← nf e + 1;
20     Update g(t) and best fitness value f (g(t)) ;

2. Instead of being initialized randomly within the range [Vmin , Vmax ] as in PSO-sono, the initial velocity of each particle is set to zero. In the real world, the velocity of physical objects in their initial positions is zero; particles initialized with non-zero velocities violate this analogy [Eng07]. Moreover, non-zero initial velocities are not needed, given that particles' positions are randomly initialized, which already ensures random positions and moving directions [Eng07].
3. Replacing t/T with nf e/nf emax . Most meta-heuristic algorithms iterate until a maximum number of function evaluations, nf emax , is reached. Finding the maximum number of iterations, T , is not always straightforward, since some meta-heuristics perform a variable number of function evaluations in each iteration. Thus, replacing T with nf emax is more useful.
The effect of the above minor changes on the performance of PSO-sono is depicted in Fig. 1. The Figure shows that initializing the velocities of particles to zero allows us to remove two parameters, Vmin and Vmax , without degrading the performance of PSO-sono, as shown in the left column of Fig. 1. The right column shows that using nf e/nf emax is at least as good as using t/T .

Figure 1: The effect of the minor changes on the performance of PSO-sono, "wins/draws/loses" means the
proposed change wins, draws and loses, respectively, against the original PSO-sono.

4.2 Major Changes
After introducing the minor changes and showing their usefulness, we integrate them into the PSO-sono algorithm. In this section, we introduce some major changes to the algorithm and show their usefulness.

4.2.1 The Ring Topology


Eq. 7 employs the fully connected topology, as depicted in Figure 2, which enables all particles in the swarm
to share information globally. In this global neighborhood, information is swiftly disseminated to all particles,
resulting in the entire swarm quickly converging to the same search region, which increases the risk of prema-
ture convergence. To address this issue, the ring topology, illustrated in Figure 3, utilizes local neighborhoods,
similar to the one used in the lbest PSO [KM02], in which each particle only communicates with its immediate
neighbors. Consequently, the population requires more iterations to converge, but it may discover better solu-
tions. For instance, in Figure 3, solution x1 has two neighbors, x2 and x6 , while solution x4 has x3 and x5
as its neighbors. As a result, each particle interacts only with its two neighbors (rather than the entire swarm),
which slows down the convergence speed of the proposed approach but reduces the likelihood of premature
convergence. In summary, while the fully connected topology tends to bias the search toward the global best
and adopt an exploitative approach, the ring topology favors exploration over exploitation. This advantage has
made the ring topology quite popular in meta-heuristic research. Recently, Lynn et al. [LAS18] identified the
use of population topologies, including the ring topology, as a key area for research.
Hence, Eq. 7 is replaced by:

vi (t + 1) = ω(t) ⋅ vi (t) + c1 (t) ⋅ r1 ⋅ (pi (t) − xi (t)) + c2 (t) ⋅ r2 ⋅ (gi (t) − xi (t)), (19)
where gi is the best particle in the neighborhood Ni , which is defined as

gi (t + 1) ∈ {y ∈ Ni ∣ f (y) = min{f (x) ∣ x ∈ Ni }}, (20)

with the neighborhood defined as Ni = {pi−1 (t), pi (t), pi+1 (t)}. Notice that the ring topology used in Eq. 18 excludes the particle itself (i.e. xi ) and uses its left and right neighbors (i.e. xi−1 and xi+1 ). Fig. 5 (first
column from the left) compares PSO-sono using the fully-connected topology with PSO-sono using the ring
topology. The results show that using the ring topology outperforms the original PSO-sono on 7 functions, i.e.
it performs better on about 60% of the functions. The fully connected topology performs better on only one
function.
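The neighborhood-best selection of Eq. 20 on a ring with wraparound can be sketched in a few lines; the tie-breaking rule (first minimal neighbor wins) is an illustrative choice of this sketch:

```python
def ring_best(pbest_fitness):
    """Index of the best personal best in each particle's ring neighbourhood
    {i-1, i, i+1} (indices wrap around), following Eq. 20; minimization is
    assumed."""
    N = len(pbest_fitness)
    return [min([(i - 1) % N, i, (i + 1) % N], key=lambda j: pbest_fitness[j])
            for i in range(N)]
```

Each particle then uses gi = pbest[ring_best(...)[i]] in Eq. 19 instead of the single global best.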

Figure 2: A fully-connected topology with 6 particles.

4.2.2 The Ratio r


Rather than using Eq. 6 to dynamically adjust the ratio r, two simpler alternatives are proposed. The first
approach is to linearly increase the ratio r from 0.1 to 0.9 as follows:
r = 0.1 + 0.8 ⋅ nf e/nf emax . (21)
The second approach uses a non-linear equation to increase r from 0.1 to 0.9:

Figure 3: A ring topology with 6 particles.

r = 1 − 0.9 ⋅ (0.1/0.9)^(nf e/nf emax ) . (22)
Thus, the focus will first be on exploration, i.e. 90% of the particles use Eq. 12, then on exploitation, i.e.
90% of the swarm use Eq. 7.
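The two schedules of Eqs. 21 and 22 can be written directly. Both run from 0.1 to 0.9, but the non-linear schedule stays above the linear one in mid-run, i.e. it shifts particles toward the better-particle update earlier:

```python
def ratio_linear(nfe, nfe_max):
    """Eq. 21: r grows linearly from 0.1 to 0.9 over the run."""
    return 0.1 + 0.8 * nfe / nfe_max

def ratio_nonlinear(nfe, nfe_max):
    """Eq. 22: r = 1 - 0.9 * (0.1/0.9)^(nfe/nfe_max); also 0.1 -> 0.9,
    but it rises faster early in the run."""
    return 1.0 - 0.9 * (0.1 / 0.9) ** (nfe / nfe_max)
```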
Figure 4 illustrates the difference between the two approaches.

Figure 4: Linear vs. non-linear increase of r.

Fig. 5 compares the performance of PSO-sono using the original approach (Eq. 6), the linear approach (Eq. 21) and the non-linear approach (Eq. 22). The results (second, third and fourth columns from the left) show that the two proposed approaches are generally better than the original one. The linear and non-linear approaches are comparable. However, the non-linear approach will be adopted in this study since it did not degrade the performance of PSO-sono on any problem.

4.2.3 The Simplified FIS Scheme


The fully-informed search scheme of the PSO-sono algorithm uses the knowledge of the whole population to avoid premature convergence. However, the scheme is relatively difficult to understand and implement. A simpler scheme is Centroid Opposition-Based Learning (COBL) [RJF+ 14], which has achieved remarkable success in competition-winning Differential Evolution algorithms [SBN+ 23]. In COBL, an opposite point (relative to a point x) is defined as follows,

xO = 2 ⋅ x̄center − x. (23)
From the above equation, it is clear that COBL uses the knowledge of the whole swarm as in the more
complex FIS scheme but in a simpler way. Using COBL, Eqs. 17 and 18 are replaced with the following
equation:

xFIS (t) = 2 ⋅ x̄center (t) − g(t). (24)

Hence, we are comparing the best solution found so far with its opposite as defined in Eq. 23. Fig. 5 (the
rightmost column) summarizes the results of using the simplified FIS rather than the original FIS in PSO-sono.
The results show that the simpler scheme is at least as good as the original FIS scheme.
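Eqs. 23-24 reduce to a one-liner; bound handling after the reflection is left out here, as the text does not specify it:

```python
def cobl_opposite(center, point):
    """Eq. 23: centroid-based opposite x_O = 2 * center - x (per dimension).
    Applied to g(t) with the swarm centroid, this yields Eq. 24."""
    return [2.0 * c - p for c, p in zip(center, point)]
```

As in Eq. 16, the best solution found so far is replaced by its opposite only when the opposite has a better fitness value.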
To summarize, Fig. 5 shows that using the ring topology is the most important change to the original approach, followed by changing the update equation of the ratio r, and finally by using COBL.

Figure 5: The effect of the major changes on the performance of PSO-sono, "wins/draws/loses" means the
proposed change wins, draws and loses, respectively, against the original PSO-sono.

4.2.4 The IPSO-sono Algorithm


The final version after incorporating all the aforementioned changes is called the Improved PSO-sono (IPSO-
sono) Algorithm. A pseudocode of the proposed approach is shown in Alg. 2. A Matlab implementation is pub-
licly posted at https://www.mathworks.com/matlabcentral/fileexchange/130364-ipso-sono.

Algorithm 2: Pseudocode for the IPSO-sono Algorithm
input : Solution space [l, u] and maximum number of function evaluations nf emax .
output: Number of function evaluations nf e, best solution g and best fitness value f (g).
1  t ← 1;
2  for i ← 1 to N do
3      Initialize particle i velocity vi (t) ← 0 ;
4      Initialize the i-th particle, xi (t) ∼ U ([l, u])D ;
5      Calculate the fitness value f (xi (t)) ;
6  nf e ← N ;
7  Label g(t) and f (g) ;
8  while nf e < nf emax do
9      Sort the swarm in descending order ;
10     Calculate the parameters according to Eqs. 9, 10 and 11 but using nf e/nf emax (rather than t/T ) ;
11     Separate the swarm into two groups according to r as defined in Eq. 22 ;
12     Update particles in the better-particle group according to Eqs. 19 and 8 ;
13     Update particles in the worse-particle group according to Eqs. 12 and 13 ;
14     for i ← 1 to N do
15         Calculate the fitness value f (xi (t)) ;
16     nf e ← nf e + N ;
17     Apply the simplified FIS using Eq. 24 ;
18     nf e ← nf e + 1;
19     Update g(t) and best fitness value f (g(t)) ;

5 Experimental Results
To investigate the performance of the proposed approach, the 12 benchmark functions of the CEC 2022 com-
petition on single objective bound-constrained numerical optimization [KPM+ 21] have first been used. A set of
21 real-world optimization problems have then been used for comparison purposes.
The CEC 2022 test set consists of 12 functions with different characteristics:
• One unimodal function,
• four shifted and rotated functions,
• three hybrid functions that are linear combinations of some functions, and

• four composition functions.


The above functions can be used with different values of D; in this study, we set D = 20. The search space for the functions in the set is [−100, 100]D . The details of these functions can be found in [KPM+ 21]. A brief
description is given in Table 1.
In this study, the competing optimization approaches run for 1,000,000 function evaluations. Each experi-
ment is repeated 30 times, then the pairwise Wilcoxon signed rank test [Wil45] (with α = 5%) is used to validate
the results. Moreover, Laplace’s Rule of Succession is used to compare the different optimization algorithms as
described in [OC23].
All algorithms have been implemented in the Matlab programming language (Matlab R2017b). The pro-
grams have been run on an HP Desktop with 3.6 GHz Intel Core i7 and 32-GB RAM.

Table 1: Summary of the CEC 2022 benchmark functions.

Type Function Description Optimum value


Unimodal Function f1 Shifted and rotated Zakharov 300
Basic Function f2 Shifted and rotated Rosenbrock 400
Basic Function f3 Shifted and rotated expanded Schaffer’s f 6 600
Basic Function f4 Shifted and rotated non-continuous Rastrigin 800
Basic Function f5 Shifted and rotated Levy 900
Hybrid Function f6 Hybrid Function 1 (N = 3) 1800
Hybrid Function f7 Hybrid Function 2 (N = 6) 2000
Hybrid Function f8 Hybrid Function 3 (N = 5) 2200
Composition Function f9 Composition Function 1 (N = 5) 2300
Composition Function f 10 Composition Function 2 (N = 4) 2400
Composition Function f 11 Composition Function 3 (N = 5) 2600
Composition Function f 12 Composition Function 4 (N = 6) 2700

5.1 IPSO-sono vs. PSO-sono


In this section, IPSO-sono is compared against the original PSO-sono algorithm. The parameter settings of these
two algorithms are listed in Table 2. For PSO-sono we used the values suggested by the authors in [MZML22].
First, the signatures [Cle15] of the two approaches are generated and depicted in Fig. 6. The two signatures have been generated using 5 runs, a swarm size of 50 particles and 1000 function evaluations. The signature of IPSO-sono is more uniform, covering the whole search space, while the signature of PSO-sono shows some bias towards the corners of the search space. Focusing on certain parts of the search space is considered harmful and should be avoided.
Table 3 summarizes the median and minimum objective function values reached by IPSO-sono and PSO-
sono when applied to the IEEE CEC 2022 test set. The results of the Wilcoxon test and Laplace’s Rule of
Succession are reported in Table 4. The results of the two tables show that IPSO-sono outperformed PSO-sono

Algorithm The default parameter settings


IPSO-sono N = 100, iw ∈ [0.4, 0.9] and ring topology
PSO-sono N = 100, iw ∈ [0.4, 0.9], ϵ = D/N ⋅ 0.01, r = 0.5, Vmin = −30, Vmax = 30 and ring topology

Table 2: The default parameter settings of the PSO-sono variants.

Figure 6: The signatures of IPSO-sono and PSO-sono.

on 9 functions, while performing comparably on 2 functions. The only function where PSO-sono performs better is f8 .
To study the convergence behavior of the two algorithms, a set of six representative functions is depicted in Fig. 7. The Figure shows the average minimum objective function value obtained by each algorithm for each value of nf e. It shows that IPSO-sono generally reaches better solutions than PSO-sono.
Another important factor to consider is the diversity of the swarm defined (as in [PTB17]) by:
DI = √( (1/P ) ∑i=1..P ∑j=1..D (xi,j − x̄j )2 ), (25)

where xj is the j-th component of the mean vector of the solutions in the swarm. Fig. 8 depicts the average
diversity of the swarm for the two algorithms on the same six functions shown in Fig. 7
The Figure shows that IPSO-sono maintains the diversity of its swarm for a longer period than PSO-sono,
thus, reducing the possibility of premature convergence.
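As a concrete reference for Eq. (25), a minimal Python sketch of the diversity measure (function and variable names are ours, not from the papers):

```python
import math

def diversity(swarm):
    """Swarm diversity DI as in Eq. (25): the square root of the mean
    (over the P particles) squared distance of each particle from the
    swarm centroid."""
    P, D = len(swarm), len(swarm[0])
    centroid = [sum(x[j] for x in swarm) / P for j in range(D)]
    total = sum((x[j] - centroid[j]) ** 2 for x in swarm for j in range(D))
    return math.sqrt(total / P)

# A tight cluster has lower diversity than a spread-out swarm:
tight = [[1.0, 1.0], [1.01, 0.99], [0.99, 1.01]]
spread = [[-5.0, 5.0], [5.0, -5.0], [0.0, 0.0]]
```

A swarm whose particles have all collapsed onto one point has DI = 0, which is why a diversity curve that drops early signals premature convergence.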

Table 3: Comparison between the median and best objective function values obtained by IPSO-sono and PSO-
sono on the IEEE CEC 2022 functions.

Function IPSO-sono Median PSO-sono Median IPSO-sono Best PSO-sono Best


f1 3.000000e+02 3.000000e+02 3.000000e+02 3.000000e+02
f2 4.512388e+02 4.521901e+02 4.000000e+02 4.000000e+02
f3 6.000000e+02 6.000120e+02 6.000000e+02 6.000000e+02
f4 8.129345e+02 8.194017e+02 8.049748e+02 8.119395e+02
f5 9.000000e+02 9.000000e+02 9.000000e+02 9.000000e+02
f6 1.871739e+03 2.710879e+03 1.813307e+03 1.830137e+03
f7 2.025320e+03 2.035596e+03 2.003305e+03 2.023337e+03
f8 2.225003e+03 2.222185e+03 2.222229e+03 2.220813e+03
f9 2.484040e+03 2.485231e+03 2.482070e+03 2.482301e+03
f 10 2.500322e+03 2.500450e+03 2.500239e+03 2.500265e+03
f 11 2.900000e+03 2.900000e+03 2.900000e+03 2.600000e+03
f 12 2.953127e+03 2.967288e+03 2.938418e+03 2.951180e+03

5.2 The algorithmic overhead


In this section, the efficiency of IPSO-sono and PSO-sono is compared. To measure the efficiency, the
algorithmic overhead is determined. This overhead is defined as the difference between the time taken by an
optimization algorithm using nfe_max function evaluations and the time needed to perform nfe_max function
evaluations on their own.
The time required to perform nfe_max function evaluations is denoted Tevals. In this section, Tevals is set to
the average (over 30 runs) time needed to perform 1,000,000 evaluations of f1.
The mean time needed by an optimization algorithm to run for 1,000,000 function evaluations is recorded
as Telapsed.
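The overhead measurement can be sketched as follows; the timing harness and the baseline random-search optimizer are hypothetical stand-ins, not the actual IPSO-sono implementation:

```python
import random
import time

def timed(fn):
    """Wall-clock time of a zero-argument callable, in seconds."""
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

def sphere(x):
    """Simple stand-in objective used in place of f1."""
    return sum(v * v for v in x)

def random_search(f, dim, nfe_max):
    """Hypothetical baseline optimizer, used only to exercise the timing."""
    best = float("inf")
    for _ in range(nfe_max):
        best = min(best, f([random.uniform(-5, 5) for _ in range(dim)]))
    return best

def measure_overhead(run_algorithm, nfe_max=100_000, dim=10):
    """Overhead = T_elapsed (a full optimizer run consuming nfe_max
    evaluations) minus T_evals (the same number of raw evaluations)."""
    x = [1.0] * dim
    t_evals = timed(lambda: [sphere(x) for _ in range(nfe_max)])
    t_elapsed = timed(lambda: run_algorithm(sphere, dim, nfe_max))
    return t_elapsed - t_evals
```

In practice both times should be averaged over many runs, as done in the text, to smooth out timer noise.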

Table 4: Statistical results of IEEE CEC 2022 problems using Wilcoxon’s Test (α = 0.05), where + indicates
that IPSO-sono wins, = indicates that both algorithms are the same and − means that PSO-sono wins. The last
column shows the prediction of Laplace’s Rule that IPSO-sono will win in the next (future) run.

Function p-value Laplace’s (%)


f1 1.000000e+00(=) 56
f2 3.160338e-02(+) 63
f3 1.734398e-06(+) 97
f4 7.712174e-04(+) 81
f5 1.953125e-03(+) 47
f6 1.044440e-02(+) 72
f7 3.882182e-06(+) 91
f8 7.690859e-06(−) 16
f9 4.681835e-03(+) 66
f 10 1.956922e-02(+) 78
f 11 6.484375e-01(=) 28
f 12 2.163022e-05(+) 88
+/=/− 9/2/1
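The Laplace column in Table 4 follows the classic rule of succession [OC23]: with `wins` wins observed in `runs` runs, the probability that IPSO-sono wins the next run is estimated as (wins + 1)/(runs + 2). A minimal sketch (the per-function win counts are not reproduced here):

```python
def laplace_success(wins, runs):
    """Laplace's Rule of Succession: after `wins` successes in `runs`
    independent trials, the estimated probability that the next trial is
    a success is (wins + 1) / (runs + 2). Note that the estimate never
    reaches exactly 0% or 100%."""
    return (wins + 1) / (runs + 2)
```

For example, 28 wins out of 30 runs gives `laplace_success(28, 30)` = 29/32 ≈ 0.906, i.e. about 91%.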

The difference between Telapsed and Tevals , representing the average algorithmic overhead, for D equal to
10 and 20 is shown in Fig. 9. Compared to PSO-sono, IPSO-sono exhibits significantly lower overhead (2.66
ms vs. 4.16 ms for D = 10, and 3.69 ms vs. 5.78 ms for D = 20), resulting in a speedup factor of more than
1.56.

5.3 IPSO-sono vs. Other State-of-the-art Approaches


In this section, IPSO-sono is compared with five state-of-the-art methods, namely:

• The Genetic Algorithm with Multi-Parent Crossover (GA-MPC) [ESE11], the winner of the IEEE CEC
2011 competition.
• The Improved Rao algorithm (I-Rao) [RP22], a recent variant of the Rao [Rao20] algorithm. I-Rao has
been compared with 13 approaches on 45 CEC problems and 19 real-world problems and, according to
its authors, generally outperformed all 13 of them.
• L-SHADE [TF14], a very popular and effective DE variant, which ranked first in the IEEE CEC 2014
competition.
• Jaya2 [OI22], a very recent, simple and effective variant of the Jaya algorithm [Rao16].
• The Spherical Search (SS) algorithm [KMS+ 19], a recent optimization algorithm that creates a spherical
boundary and then constructs candidate solutions on the surface of that boundary.

Table 5 reports the parameter values used for the above approaches. We followed the recommendations of
the original works when setting these values.

Table 5: Parameter settings for the state-of-the-art approaches.

Algorithm Parameters
GA-MPC Population size = 90 and p = 0.1.
IRao NP = 50.
Jaya2 Pmax = 100 and Pmin = 3.
L-SHADE NP = 18 × D, p = 0.11, and historical (circular) memory size = 5.
SS Ninit = 100, p = 0.1, rank = 0.5 × D, and c = 0.5.

Table 6 summarizes the median objective function values obtained by the six competing algorithms. Table
7 reports the results of the Wilcoxon's test. The results show that IPSO-sono outperformed GA-MPC on 6
functions, while being outperformed on 3 problems. Compared with I-Rao, IPSO-sono performed better on 7

Figure 7: Best function value curves (averaged across 30 runs) of IPSO-sono and PSO-sono on the CEC 2022
benchmark functions f1, f4, f5, f6, f10 and f12 (D = 20).

functions while performing worse on 3 functions. IPSO-sono performed better than Jaya2 on 5 functions, while
Jaya2 performed better on 4 functions. Compared to SS, the proposed approach performed better on only two
functions, while losing on six. Table 7 shows that L-SHADE is the clear winner on this set of benchmark functions.

5.4 Real-world Optimization Problems


Any optimization algorithm should be tested on a diverse set of real-world optimization problems to validate
any conclusion regarding its performance. One of the best sets of diverse problems is the 22 IEEE CEC
2011 real-world problems [DS10] summarized in Table 8. This set includes problems from different areas,
e.g., chemical systems, TLC devices and spacecraft trajectories. The problems also have different values of D,
ranging from 1 to 216. The constraint-violation handling used is the one provided by the official CEC 2011
Matlab implementation.
As suggested by [DS10], each algorithm is run 25 times using nfe_max = 150,000. We did not test our
algorithms on T3 due to its computational cost.

Figure 8: Diversity curves (averaged across 30 runs) of IPSO-sono and PSO-sono on the CEC 2022 benchmark
functions f1, f4, f5, f6, f10 and f12 (D = 20).

Table 6: Comparison between the median objective function values obtained by IPSO-sono and other meta-
heuristics on the IEEE CEC 2022 benchmark functions.

Function IPSO-sono GA-MPC I-Rao Jaya2 L-SHADE SS


f1 3.00e+02 3.00e+02 3.00e+02 3.00e+02 3.00e+02 3.00e+02
f2 4.51e+02 4.49e+02 4.49e+02 4.49e+02 4.49e+02 4.49e+02
f3 6.00e+02 6.00e+02 6.00e+02 6.00e+02 6.00e+02 6.00e+02
f4 8.13e+02 8.23e+02 8.52e+02 8.16e+02 8.04e+02 8.75e+02
f5 9.00e+02 9.04e+02 9.00e+02 9.00e+02 9.00e+02 9.00e+02
f6 1.88e+03 3.27e+03 2.49e+04 3.34e+03 1.80e+03 1.80e+03
f7 2.02e+03 2.02e+03 2.03e+03 2.02e+03 2.00e+03 2.03e+03
f8 2.23e+03 2.22e+03 2.23e+03 2.23e+03 2.22e+03 2.23e+03
f9 2.48e+03 2.48e+03 2.48e+03 2.48e+03 2.48e+03 2.48e+03
f 10 2.50e+03 2.50e+03 2.50e+03 2.50e+03 2.50e+03 2.50e+03
f 11 2.90e+03 2.90e+03 2.90e+03 2.90e+03 2.90e+03 2.90e+03
f 12 2.95e+03 2.95e+03 2.94e+03 2.94e+03 2.93e+03 2.94e+03

Figure 9: Average algorithmic overhead (in seconds) for PSO-sono and IPSO-sono, over increasing dimension-
ality values.

Table 7: Statistical results of the IEEE CEC 2022 using Wilcoxon’s Test (α = 0.05), where + indicates that
IPSO-sono wins, = indicates that both algorithms are the same and − means that other algorithm wins.

Function GA-MPC I-Rao Jaya2 L-SHADE SS


f1 5.96e-05(+) 1.00e+00(=) 1.00e+00(=) 1.00e+00(=) 1.00e+00(=)
f2 1.73e-06(−) 1.49e-05(−) 1.73e-06(−) 1.73e-06(−) 1.73e-06(−)
f3 1.73e-06(+) 1.25e-04(+) 6.23e-04(+) 1.00e+00(=) 5.00e-01(=)
f4 1.02e-05(+) 1.73e-06(+) 5.26e-03(+) 1.73e-06(−) 1.73e-06(+)
f5 2.56e-06(+) 1.16e-05(+) 2.08e-06(+) 1.00e+00(=) 1.00e+00(=)
f6 1.89e-04(+) 1.92e-06(+) 4.07e-05(+) 1.73e-06(−) 1.73e-06(−)
f7 7.19e-01(=) 5.79e-05(+) 3.39e-01(=) 1.73e-06(−) 1.16e-01(=)
f8 1.80e-05(−) 2.35e-06(+) 6.29e-01(=) 1.73e-06(−) 2.70e-02(+)
f9 1.73e-06(−) 1.73e-06(−) 1.73e-06(−) 1.73e-06(−) 1.73e-06(−)
f 10 7.97e-01(=) 3.93e-01(=) 6.42e-03(−) 4.86e-05(−) 4.90e-04(−)
f 11 2.85e-04(+) 3.13e-02(+) 7.81e-03(+) 1.00e+00(=) 1.56e-02(−)
f 12 3.60e-01(=) 5.71e-04(−) 1.13e-05(−) 1.73e-06(−) 1.73e-06(−)
+/=/− 6/3/3 7/2/3 5/3/4 0/4/8 2/4/6

First, PSO-sono and IPSO-sono are compared on the IEEE CEC 2011 problems. Table 9 reports the median
and best objective function values obtained by the two approaches. The results of the Wilcoxon's Test are
summarized in Table 10, which shows that IPSO-sono generally outperforms PSO-sono, especially when the
problem size, i.e. D, is large. This may suggest that IPSO-sono is more suitable than PSO-sono for problems
with higher dimensions.
Second, IPSO-sono is compared with the five state-of-the-art approaches described in Section 5.3. Table
11 reports the median objective function values obtained by the competing approaches. Table 12 summarizes the
statistical test results. The results show that IPSO-sono outperforms both I-Rao and SS (in fact, SS performs
very poorly on this set of problems). Jaya2 slightly outperforms IPSO-sono (7 wins vs. 5). GA-MPC (the winner
of the IEEE CEC 2011 competition) performs better than the proposed approach, which nevertheless manages
to outperform GA-MPC on 3 problems. L-SHADE confirms its superiority on this set of problems too.
Finally, one interesting case here is the performance of SS. It performs relatively well on the IEEE CEC
2022 benchmark but very poorly on the real-world problems. This may suggest that the IEEE CEC 2022 test
set alone is not sufficient to assess the performance of an optimizer. Moreover, as we mentioned from the outset,
we believe that any optimization algorithm should be tested on many real-world problems to validate its
performance. A set of diverse problems like the IEEE CEC 2011 is extremely useful for comparing different
optimization algorithms.

6 Conclusions and Future Work


In this paper we proposed a new variant of PSO-sono that incorporates the following techniques:

Table 8: Summary of the CEC 2011 real-world problems.

Problem Description D Constraints


T1 A frequency-modulated sound waves problem 6 Bound-constrained
T2 A Lennard-Jones potential problem 30 Bound-constrained
T3 A bifunctional catalyst blend control problem 1 Bound-constrained
T4 A stirred tank reactor control problem 1 Unconstrained
T5 A Tersoff potential minimization problem 30 Bound-constrained
T6 A Tersoff potential minimization problem 30 Bound-constrained
T7 A radar polyphase code design problem 20 Bound-constrained
T8 A transmission network expansion problem 7 Equality/inequality constraints
T9 A transmission pricing problem 126 Linear equality constraints
T 10 An antenna array design problem 12 Bound-constrained
T 11.1 A dynamic economic dispatch problem 120 Inequality constraints
T 11.2 A dynamic economic dispatch problem 216 Inequality constraints
T 11.3 A static economic dispatch problem 6 Inequality constraints
T 11.4 A static economic dispatch problem 13 Inequality constraints
T 11.5 A static economic dispatch problem 15 Inequality constraints
T 11.6 A static economic dispatch problem 40 Inequality constraints
T 11.7 A static economic dispatch problem 140 Inequality constraints
T 11.8 A hydrothermal scheduling problem 96 Inequality constraints
T 11.9 A hydrothermal scheduling problem 96 Inequality constraints
T 11.10 A hydrothermal scheduling problem 96 Inequality constraints
T 12 A spacecraft trajectory optimization problem 26 Bound-constrained
T 13 A spacecraft trajectory optimization problem 22 Bound-constrained

Table 9: Comparison between the median and best objective function values obtained by IPSO-sono and PSO-
sono on IEEE CEC 2011.

Function IPSO-sono Median PSO-sono Median IPSO-sono Best PSO-sono Best


T1 3.525653e+00 1.137395e+01 3.561404e-05 0.000000e+00
T2 -1.543388e+01 -2.640805e+01 -1.878921e+01 -2.841409e+01
T4 1.394558e+01 1.377076e+01 1.377076e+01 1.377076e+01
T5 -2.949597e+01 -3.191871e+01 -3.471266e+01 -3.556650e+01
T6 -2.126204e+01 -2.300593e+01 -2.647627e+01 -2.916612e+01
T7 1.380087e+00 1.057883e+00 1.035778e+00 7.650146e-01
T8 2.200000e+02 2.200000e+02 2.200000e+02 2.200000e+02
T9 7.345011e+04 1.207066e+05 5.310040e+04 8.197735e+04
T 10 -2.100682e+01 -2.029619e+01 -2.119516e+01 -2.124577e+01
T 11.1 5.298315e+04 2.743252e+06 5.193080e+04 1.247925e+06
T 11.2 2.220234e+07 2.579824e+07 2.136842e+07 2.423546e+07
T 11.3 1.547669e+04 1.547753e+04 1.544700e+04 1.544443e+04
T 11.4 1.893293e+04 1.864120e+04 1.864640e+04 1.851259e+04
T 11.5 3.299777e+04 3.299611e+04 3.291417e+04 3.292042e+04
T 11.6 1.361637e+05 1.380874e+05 1.317670e+05 1.317072e+05
T 11.7 1.947539e+06 1.951093e+06 1.929639e+06 1.913456e+06
T 11.8 9.432657e+05 9.507742e+05 9.369032e+05 9.422754e+05
T 11.9 1.295551e+06 1.687659e+06 1.178849e+06 1.225939e+06
T 11.10 9.429457e+05 9.531398e+05 9.397135e+05 9.435308e+05
T 12 1.829857e+01 1.770570e+01 1.418579e+01 1.428107e+01
T 13 1.966660e+01 2.132887e+01 1.487286e+01 1.561390e+01

• Ring topology: the original PSO-sono uses the fully-connected topology for its better-particle group,
which often results in premature convergence. A ring topology is therefore used to slow down the exchange
of information between the particles, reducing the likelihood of premature convergence.

• Non-linear ratio (r) reduction: a simpler formula is used to update the parameter r. The formula focuses
on exploration at the beginning of a run while gradually switching to exploitation towards the end of the run.
• COBL: a simpler FIS scheme is used, based on centroid opposition-based learning (COBL). The simplified
scheme still uses the knowledge of the whole swarm, as in the original FIS scheme, but is much easier to
understand and implement.
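The first and third mechanisms can be sketched as follows; this is an illustrative reading of the two ideas, not the exact IPSO-sono code (the neighbourhood size and the COBL acceptance rule are assumptions):

```python
def ring_best(fitness):
    """For each particle i, the index of the best (lowest-fitness)
    particle in its ring neighbourhood {i-1, i, i+1}, with wrap-around
    at the ends, so information spreads one link per iteration."""
    n = len(fitness)
    best = []
    for i in range(n):
        nbrs = ((i - 1) % n, i, (i + 1) % n)
        best.append(min(nbrs, key=lambda k: fitness[k]))
    return best

def centroid_opposite(swarm, i, lo, hi):
    """Centroid opposition-based learning (COBL, [RJF+14]): reflect
    particle i through the centroid c of the whole swarm, x* = 2c - x,
    clipped to the bounds [lo, hi]."""
    P, D = len(swarm), len(swarm[0])
    c = [sum(x[j] for x in swarm) / P for j in range(D)]
    return [min(hi, max(lo, 2 * c[j] - swarm[i][j])) for j in range(D)]
```

In a full-connected topology every particle would instead see the single global best, which is exactly the fast information flow the ring is meant to slow down.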

Some other minor changes have also been proposed. The proposed approach, called IPSO-sono, was tested
on 12 IEEE CEC 2022 functions and 21 IEEE CEC 2011 real-world problems. The results show that IPSO-sono

Table 10: Statistical results of the CEC 2011 problems using Wilcoxon’s Test (α = 0.05), where + indicates that
IPSO-sono wins, = indicates that both algorithms are the same and − means that PSO-sono wins.

Function p-value
T1 1.284505e-01(=)
T2 1.229032e-05(−)
T4 5.478329e-01(=)
T5 2.831363e-02(−)
T6 2.758321e-01(=)
T7 5.132932e-05(−)
T8 1.000000e+00(=)
T9 2.001302e-05(+)
T 10 8.085130e-05(+)
T 11.1 1.229032e-05(+)
T 11.2 1.229032e-05(+)
T 11.3 8.611622e-01(=)
T 11.4 9.042082e-05(−)
T 11.5 5.097549e-01(=)
T 11.6 8.705089e-03(+)
T 11.7 3.260495e-01(=)
T 11.8 6.450980e-05(+)
T 11.9 7.224473e-05(+)
T 11.10 1.259557e-04(+)
T 12 7.366172e-01(=)
T 13 5.271827e-01(=)
+/=/− 8/9/4

Table 11: Comparison between the median objective function values obtained by IPSO-sono and other meta-
heuristics on the IEEE CEC 2011 benchmark problems.

Function IPSO-sono GA-MPC I-Rao Jaya2 L-SHADE SS


T1 1.27e+00 0.00e+00 9.06e+00 0.00e+00 1.68e-21 1.39e+01
T2 -1.55e+01 -2.64e+01 -1.09e+01 -2.65e+01 -2.60e+01 -9.01e+00
T4 1.40e+01 1.40e+01 1.38e+01 1.38e+01 1.43e+01 2.11e+01
T5 -2.77e+01 -3.43e+01 -2.21e+01 -3.16e+01 -3.65e+01 -2.06e+01
T6 -2.28e+01 -2.30e+01 -1.73e+01 -2.30e+01 -2.92e+01 -1.53e+01
T7 1.38e+00 9.22e-01 1.78e+00 1.67e+00 1.20e+00 2.22e+00
T8 2.20e+02 2.20e+02 2.20e+02 2.20e+02 2.20e+02 2.92e+02
T9 7.43e+04 4.96e+04 3.03e+03 1.52e+03 2.47e+03 3.25e+06
T 10 -2.11e+01 -2.14e+01 -1.47e+01 -2.13e+01 -2.16e+01 -1.54e+01
T 11.1 5.27e+04 5.26e+04 5.26e+04 5.23e+04 5.20e+04 6.20e+06
T 11.2 2.24e+07 2.16e+07 1.76e+07 1.76e+07 1.78e+07 5.89e+07
T 11.3 1.55e+04 1.54e+04 1.55e+04 1.55e+04 1.54e+04 1.57e+04
T 11.4 1.90e+04 1.85e+04 1.92e+04 1.91e+04 1.81e+04 2.06e+05
T 11.5 3.30e+04 3.30e+04 3.28e+04 3.28e+04 3.27e+04 1.71e+06
T 11.6 1.37e+05 1.36e+05 1.35e+05 1.35e+05 1.24e+05 9.51e+05
T 11.7 1.95e+06 2.04e+06 2.12e+06 1.98e+06 1.85e+06 1.86e+10
T 11.8 9.43e+05 9.75e+05 1.62e+06 9.98e+05 9.31e+05 1.55e+08
T 11.9 1.32e+06 1.22e+06 1.98e+06 1.38e+06 9.39e+05 1.56e+08
T 11.10 9.43e+05 9.70e+05 1.58e+06 1.02e+06 9.32e+05 1.60e+08
T 12 1.82e+01 1.62e+01 1.84e+01 1.46e+01 1.61e+01 4.24e+01
T 13 1.94e+01 2.08e+01 1.58e+01 2.10e+01 1.43e+01 3.99e+01

clearly outperforms PSO-sono on most problems. Moreover, it is more efficient, i.e. it requires less computation
time than the original algorithm.
However, when compared with other meta-heuristics, the results are mixed. For example, IPSO-sono generally
outperforms I-Rao and SS, while performing comparably to Jaya2. The performance of IPSO-sono compared to
GA-MPC depends on the problem type: on the IEEE CEC 2022 functions, IPSO-sono is generally better, while
on the IEEE CEC 2011 problems, GA-MPC (the winner of the competition) is much better. L-SHADE, a DE
variant, is clearly the winner on both sets of problems. This confirms the findings of [PNP23], namely that DE
variants generally outperform PSO variants on a wide range of problems.
Future work will explore using IPSO-sono to solve more practical problems (e.g., we are currently investi-
gating its use for color image quantization). A discrete version of IPSO-sono may also be investigated.

Table 12: Statistical results of the CEC 2011 problems using Wilcoxon’s Test (α = 0.05), where + indicates that
IPSO-sono wins, = indicates that both algorithms are the same and − means that other algorithm wins.

Function GA-MPC I-Rao Jaya2 L-SHADE SS


T1 6.38e-01(=) 2.30e-02(+) 6.96e-01(=) 5.76e-05(−) 2.26e-05(+)
T2 1.23e-05(−) 8.09e-05(+) 1.23e-05(−) 1.23e-05(−) 1.23e-05(+)
T4 9.09e-01(=) 6.89e-01(=) 1.37e-01(=) 9.63e-04(+) 1.23e-05(+)
T5 2.26e-05(−) 1.23e-05(+) 1.57e-03(−) 1.23e-05(−) 1.23e-05(+)
T6 6.96e-01(=) 1.23e-05(+) 3.26e-01(=) 1.23e-05(−) 1.23e-05(+)
T7 1.77e-05(−) 1.23e-05(+) 5.35e-03(+) 1.94e-04(−) 1.23e-05(+)
T8 1.00e+00(=) 1.00e+00(=) 1.00e+00(=) 1.00e+00(=) 1.18e-05(+)
T9 8.27e-02(=) 1.23e-05(−) 1.23e-05(−) 1.23e-05(−) 1.23e-05(+)
T 10 8.04e-03(−) 4.57e-05(+) 4.76e-01(=) 1.23e-05(−) 1.23e-05(+)
T 11.1 1.83e-01(=) 5.45e-01(=) 6.93e-02(=) 1.28e-02(−) 1.23e-05(+)
T 11.2 2.40e-04(−) 1.23e-05(−) 1.23e-05(−) 1.23e-05(−) 1.23e-05(+)
T 11.3 2.40e-04(−) 2.26e-03(+) 7.37e-01(=) 4.07e-05(−) 1.23e-05(+)
T 11.4 1.23e-05(−) 1.94e-04(+) 1.10e-02(+) 1.23e-05(−) 1.23e-05(+)
T 11.5 8.71e-03(−) 1.23e-05(−) 1.23e-05(−) 1.23e-05(−) 1.23e-05(+)
T 11.6 3.00e-01(=) 8.27e-02(=) 6.31e-03(−) 1.23e-05(−) 1.23e-05(+)
T 11.7 8.09e-05(+) 2.66e-04(+) 7.42e-03(+) 1.23e-05(−) 1.23e-05(+)
T 11.8 5.45e-04(+) 1.23e-05(+) 1.40e-04(+) 1.23e-05(−) 1.23e-05(+)
T 11.9 1.73e-02(−) 1.23e-05(+) 6.00e-01(=) 1.23e-05(−) 1.23e-05(+)
T 11.10 1.23e-05(+) 1.23e-05(+) 1.23e-05(+) 1.23e-05(−) 1.23e-05(+)
T 12 2.16e-04(−) 7.57e-01(=) 1.39e-05(−) 3.62e-05(−) 1.23e-05(+)
T 13 7.37e-01(=) 1.73e-02(−) 7.57e-01(=) 1.77e-05(−) 1.23e-05(+)
+/=/− 3/8/10 12/5/4 5/9/7 1/1/19 21/0/0

Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding
author on reasonable request.

Declarations
Competing Interests The authors declare that they have no conflict of interest.
Authors contribution statement M.O. and H.W. developed the idea. M.O. implemented the proposed ap-
proach. M.O. and M.A. conducted the experiments. M.O. prepared the manuscript. H.W. reviewed the
manuscript and provided feedback.
Ethical and informed consent for data used This article does not contain any studies with human participants
or animals performed by any of the authors.

References
[CJ15] R. Cheng and Y. Jin. A social learning particle swarm optimization algorithm for scalable opti-
mization. Information Sciences, 291:43–60, 2015.
[Cle15] M Clerc. Guided Randomness in Optimization. Wiley, 2015.

[DS10] Swagatam Das and Ponnuthurai N Suganthan. Problem definitions and evaluation criteria for
cec 2011 competition on testing evolutionary algorithms on real world optimization problems.
Jadavpur University, Nanyang Technological University, Kolkata, 2010.
[DSKC19] T. Dokeroglu, E. Sevinc, T. Kucukyilmaz, and A. Cosar. A survey on new generation meta-
heuristic algorithms. Computers & Industrial Engineering, 137, 2019.

[Eng07] A Engelbrecht. Computational Intelligence: An Introduction. Wiley, 2007.


[ESE11] Saber M Elsayed, Ruhul A Sarker, and Daryl L Essam. GA with a new multi-parent crossover for
solving ieee-cec2011 competition problems. In Congress on Evolutionary Computation (CEC),
pages 1034–1040. IEEE, 2011.

[GL98] Fred Glover and Manuel Laguna. Tabu Search, pages 2093–2229. Springer US, Boston, MA,
1998.
[Gol89] David E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning.
Addison-Wesley, New York, 1989.
[HM99] Pierre Hansen and Nenad Mladenović. An Introduction to Variable Neighborhood Search, pages
433–458. Springer US, Boston, MA, 1999.
[HMSCS19] Kashif Hussain, Mohd Najib Mohd Salleh, Shi Cheng, and Yuhui Shi. Metaheuristic research: a
comprehensive survey. Artificial Intelligence Review, 52(4):2191–2233, 2019.
[KE95] J Kennedy and R Eberhart. Particle swarm optimization. In International Conference on Neural
Networks (ICNN), pages 1942–1948. IEEE, 1995.
[KM02] J Kennedy and R Mendes. Population structure and particle swarm performance. In Congress
on Evolutionary Computation (CEC), volume 2, page 1671–1676. IEEE, 2002.
[KMS+ 19] A Kumar, R Misra, D Singh, S Mishra, and S Das. The spherical search algorithm for bound-
constrained global optimization problems. Appl. Soft Comput., 85, 2019.
[KPM+ 21] A Kumar, K Price, A Mohamed, A Hadi, and P Suganthan. Problem definitions and evaluation
criteria for CEC 2022 competition on single objective bound constrained numerical optimization.
Technical report, 2021.
[LAS18] Nandar Lynn, Mostafa Z Ali, and Ponnuthurai Nagaratnam Suganthan. Population topologies
for particle swarm optimization and differential evolution. Swarm and evolutionary computation,
39:24–35, 2018.
[LQSB06] J.J. Liang, A.K. Qin, P.N. Suganthan, and S. Baskar. Comprehensive learning particle swarm
optimizer for global optimization of multimodal functions. IEEE Transactions on Evolutionary
Computation, 10(3):281–295, 2006.
[LS17] N. Lynn and P. Suganthan. Ensemble particle swarm optimizer. Applied Soft Computing, 55:533–
548, 2017.
[LZT20] H. Liu, X. Zhang, and L. Tu. A modified particle swarm optimization using adaptive strategy.
Expert Systems with Applications, 152:533–548, 2020.
[MD99] M. Dorigo and G. Di Caro. The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo,
and F. Glover, editors, New Ideas in Optimization, pages 11–32. McGraw Hill, London, 1999.
[MZML22] Z Meng, Y Zhong, G Mao, and Y Liang. PSO-sono: A novel PSO variant for single-objective
numerical optimization. Information Sciences, 586:176–191, 2022.
[NDM+ 12] M. Nasir, S. Das, D. Maity, S. Sengupta, U. Halder, and P. Suganthan. A dynamic neighbor-
hood learning based particle swarm optimizer for global numerical optimization. Information
Sciences, 209:16–36, 2012.
[OC23] M Omran and M Clerc. Laplace’s rule of succession: a simple and efficient way to compare
metaheuristics. Neural Computing and Applications, 2023.
[OI22] M. Omran and G. Iacca. An improved jaya optimization algorithm with ring topology and
population size reduction. Journal of Intelligent Systems, 31(1):1178–1210, 2022.
[OVRDS+ 21] E. Osaba, E. Villar-Rodriguez, J. Del Ser, A. Nebro, D. Molina, A. LaTorre, P. Suganthan,
C. Coello Coello, and F. Herrera. A tutorial on the design, experimentation and application of
metaheuristic algorithms to real-world optimization problems. Swarm and Evolutionary Com-
putation, 64, 2021.
[PLK+ 22] J. Peng, Y. Li, H. Kang, Y. Shen, X. Sun, and Q. Chen. Impact of population topology on
particle swarm optimization and its variants: An information propagation perspective. Swarm
and Evolutionary Computation, 69, 2022.

[PNP23] A. Piotrowski, J. Napiorkowski, and A. Piotrowska. Particle swarm optimization or differential
evolution—a comparison. Engineering Applications of Artificial Intelligence, 121, 2023.
[PTB17] R. Poláková, J. Tvrdik, and P. Bujok. Adaptation of population size according to current popu-
lation diversity in differential evolution. In Proceedings of the 2017 IEEE Symposium Series
on Computational Intelligence (SSCI), pages 2627–2634. IEEE, 2017.

[Rao16] R. V. Rao. Jaya: A simple and new optimization algorithm for solving constrained and uncon-
strained optimization problems. Int. J. Ind. Eng. Comput., 7(1):19–34, 2016.
[Rao20] R. Rao. Rao algorithms: Three metaphor-less simple algorithms for solving optimization prob-
lems. International Journal of Industrial Engineering Computations, 11(1):107–130, 2020.

[RJF+ 14] S Rahnamayan, J Jesuthasan, F Bourennani, H Salehinejad, and G Naterer. Computing oppo-
sition by involving entire population. In Congress on Evolutionary Computation (CEC), pages
1800–1807. IEEE, 2014.
[RP22] R. V. Rao and R. B. Pawar. Improved rao algorithm: a simple and effective algorithm for con-
strained mechanical design optimization problems. Soft Computing, 27(7):3847–3868, 2022.

[SBN+ 23] Tapas Si, Debolina Bhattacharya, Somen Nayak, Péricles B.C. Miranda, Utpal Nandi, Saurav
Mallik, Ujjwal Maulik, and Hong Qin. Pcobl: A novel opposition-based learning strategy to
improve metaheuristics exploration and exploitation for solving global optimization problems.
IEEE Access, pages 1–1, 2023.
[SD08] Krzysztof Socha and Marco Dorigo. Ant colony optimization for continuous domains. European
journal of operational research, 185(3):1155–1173, 2008.
[SESA+ 22] Tareq M. Shami, Ayman A. El-Saleh, Mohammed Alswaitti, Qasem Al-Tashi, Mhd Amen Sum-
makieh, and Seyedali Mirjalili. Particle swarm optimization: A comprehensive survey. IEEE
Access, 10:10031–10061, 2022.

[SK21] B. Suman and P. Kumar. A survey of simulated annealing as a tool for single and multiobjective
optimization. Journal of the Operational Research Society, 57(10):1143–1160, 2021.
[SP95] Rainer Storn and Kenneth Price. Differential evolution-a simple and efficient adaptive scheme
for global optimization over continuous spaces. Technical report, Berkeley: ICSI, 1995.
[TF14] Ryoji Tanabe and Alex Fukunaga. Improving the search performance of SHADE using linear
population size reduction. In Congress on Evolutionary Computation (CEC), pages 1658–1665.
IEEE, 2014.
[Wil45] Frank Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83,
1945.

[Yan14] Xin-She Yang. Nature-inspired optimization algorithms. Elsevier, 2014.
