
Hindawi

Journal of Mathematics
Volume 2021, Article ID 1626457, 17 pages
https://doi.org/10.1155/2021/1626457

Research Article
A Multiobjective Particle Swarm Optimization Algorithm
Based on Grid Technique and Multistrategy

Kangge Zou,1 Yanmin Liu,2 Shihua Wang,1 Nana Li,3 and Yaowei Wu4

1 School of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
2 Zunyi Normal University, Zunyi 563002, China
3 School of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, China
4 School of Mathematics and Computational Statistics, Wuyi University, Jiangmen 529000, China

Correspondence should be addressed to Yanmin Liu; [email protected]

Received 24 September 2021; Accepted 12 November 2021; Published 7 December 2021

Academic Editor: Nan-Jing Huang

Copyright © 2021 Kangge Zou et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

When faced with complex optimization problems with multiple objectives and multiple variables, many multiobjective particle swarm algorithms are prone to premature convergence. To enhance the convergence and diversity of the multiobjective particle swarm algorithm, a multiobjective particle swarm optimization algorithm based on the grid technique and multistrategy (GTMSMOPSO) is proposed. The algorithm randomly uses one of two different evaluation index strategies (a convergence evaluation index and a distribution evaluation index), combined with the grid technique, to enhance the diversity and convergence of the population and to improve the probability of particles flying to the true Pareto front. A combination of the grid technique and a mixed evaluation index strategy is used to maintain the external archive, which avoids removing particles with better convergence based only on particle density, a practice that leads to population degradation and weakens the particles' exploitation ability. At the same time, a variation operation is proposed to avoid rapid degradation of the population and to enhance the particle search capability. The simulation results show that the proposed algorithm has better convergence and distribution than CMOPSO, NSGAII, MOEAD, MOPSOCD, and NMPSO.

1. Introduction

Most of today's scientific and engineering problems are characterized by the fact that they usually have multiple conflicting objectives [1], and decision makers need to simultaneously optimize multiple objectives as well as possible within a given range; such problems are called multiobjective optimization problems (MOPs). The optimization result of such problems is not a single solution: there exists a Pareto optimal solution set consisting of a set of compromise solutions [2]. The goal of solving such problems is to obtain well-distributed Pareto fronts in the objective space [3-6].

As the scope of human activity and of understanding and transforming the world expands, the complex optimization problems encountered in reality are complex, multimodal, nonlinear, and strongly constrained, and are difficult to model; they cannot be solved in polynomial time by classical algorithms such as the simplex algorithm and the conjugate gradient method, or they cannot even be modeled effectively. With the development of information technology, swarm intelligence algorithms, which simulate the centralized learning process of a group composed of individuals, have come into wide use and can effectively handle models that classical algorithms cannot solve. As a kind of swarm intelligence algorithm, the particle swarm optimization (PSO) algorithm is simple to operate, converges quickly, and has good potential for solving MOPs.

When PSO is applied to MOPs, it is called multiobjective particle swarm optimization (MOPSO). MOPSO needs to solve at least three problems. The first problem is how to preserve the set of noninferior solutions generated by the algorithm at each iteration, especially when

the number of iterations is large, and how to control the number of solutions so that the noninferior solutions obtained by the algorithm meet the solution quality requirements. The second problem is how to select the learning samples of particles among the many noninferior solutions. Since there is no single truly optimal solution in MOPs, the global optimal particle and the individual optimal particle of MOPSO need to be selected by a specific strategy. The third problem is how to maintain the population diversity and prevent the population from falling into local optimal solutions. Due to the fast convergence speed of MOPSO, the resulting set of noninferior solutions is prone to lose diversity rapidly during the algorithm's learning process.

The problems of maintaining the size of external archives, selecting learning samples of particles among many noninferior solutions, maintaining the diversity of the population, and preventing the population from falling into local optimal solutions are the core of MOPSO. In [7], MOPSOhv, a new hypervolume-based multiobjective particle swarm optimizer, is proposed. The algorithm uses the hypervolume contribution of the archived solutions to select global and individual leaders for each particle in the main swarm and to update the noninferior solutions of the external archive. To increase the diversity of particles, the algorithm introduces a mutation operator. In [8], R2-MOPSO, a multiobjective particle swarm optimizer based on the R2 indicator and decomposition, is proposed. The algorithm uses an R2-contribution strategy to select the global best leader and update the individual best leader. The algorithm also uses an elite learning strategy and a Gaussian learning strategy to improve the diversity of particles. In [9], balancing convergence and diversity in high-dimensional objective spaces is identified as a key problem, and a many-objective particle swarm optimizer based on the R2 indicator and decomposition (R2-MaPSO) is proposed to solve it. To balance convergence and diversity, a two-level archive maintenance method based on the R2 metric and an objective-space decomposition strategy are designed. The algorithm selects the global best leader based on the R2 metric, while the selection of the individual best leader is based on Pareto dominance; the objective-space decomposition leader selection uses feedback from the two-level archive. A new velocity update method improves the exploitation ability of particles, and an elite learning strategy and an intelligent Gaussian learning strategy are embedded in R2-MaPSO to improve the ability of particles to jump out of local optimal solutions.

The current improved MOPSO variants improve the convergence and diversity of MOPSO to a certain extent, but shortcomings remain. To further improve the performance of MOPSO, this paper proposes a multiobjective particle swarm algorithm based on the grid technique and multistrategy. The main improvements to MOPSO are as follows: (1) for the maintenance of external archives, a grid technique and a mixed evaluation index are used to remove noninferior solutions from the external archive and improve the quality of candidate solutions; (2) to enhance the diversity and convergence of the population, one of two different evaluation index strategies is randomly used to select the global optimal sample based on the grid technique; and (3) to further increase the population diversity, a variation operation strategy is used to vary the particle positions. The simulation results show that the proposed algorithm has certain advantages over the other five multiobjective particle swarm algorithms.

The rest of this paper is organized as follows. Section 2 introduces the multiobjective optimization problem, related work, and the concept of the particle swarm algorithm. Section 3 presents the details of the proposed algorithm. Section 4 shows the comparison results and analysis of GTMSMOPSO against five other multiobjective particle swarm algorithms. Finally, the conclusion is given in Section 5.

2. Background

2.1. Multiobjective Optimization Problem. The general multiobjective optimization problem is defined as follows:

    min F(x) = (f_1(x), f_2(x), . . . , f_k(x)),
    s.t. g_i(x) ≤ 0, i = 1, 2, . . . , m,                     (1)
         h_j(x) = 0, j = 1, 2, . . . , p,

where x = (x_1, x_2, . . . , x_n)^T is the n-dimensional decision vector, f_i: R^n ⟶ R, i = 1, 2, . . . , k are the objective functions, and g_i: R^n ⟶ R, i = 1, 2, . . . , m and h_j: R^n ⟶ R, j = 1, 2, . . . , p are the inequality constraints and the equality constraints, respectively.

The definitions used in multiobjective optimization are given as follows.

Definition 1 (Pareto dominance). A vector m = (m_1, m_2, . . . , m_k) dominates n = (n_1, n_2, . . . , n_k) if and only if

    ∀i ∈ {1, 2, . . . , k}, m_i ≤ n_i  ∧  ∃i ∈ {1, 2, . . . , k}, m_i < n_i.    (2)

Definition 2 (Pareto optimal). A solution x′ ∈ Ω is Pareto optimal (a noninferior solution) if and only if

    ∄x ∈ Ω: F(x) < F(x′).                                     (3)

Definition 3 (Pareto optimal solution set). The set S including all Pareto optimal solutions is the Pareto optimal solution set, defined as

    S = {x′ ∈ Ω | ∄x* ∈ Ω: F(x*) < F(x′)}.                    (4)

Definition 4 (Pareto frontier). The set PF of all objective function values with the Pareto optimal solution set as the feasible region is called the Pareto frontier and is defined as

    PF = {y = (f_1(x), f_2(x), . . . , f_k(x)) | x ∈ S}.      (5)

2.2. Related Work. PSO is a kind of population intelligence optimization algorithm with simple operation and fast convergence, and it is widely used in single-objective optimization problems. It is precisely this simplicity and fast convergence that have attracted a growing number of scholars to study it. Since MOPs have no single truly optimal solution, MOPSO generally selects the global optimal samples from the set of noninferior solutions, and because MOPSO converges quickly during the learning process, the obtained set of noninferior solutions easily loses diversity, leading to a fall into local optimal solutions.

To solve the above problems, a growing number of scholars have proposed improvements to MOPSO. Coello et al. [10] proposed a multiobjective particle swarm optimization algorithm based on an adaptive grid, which uses adaptive grid technology to maintain external archives, guide particle updates, and apply mutations to the particles and the particles' flight area. Although the algorithm shows some advantages over traditional multiobjective evolutionary algorithms on MOPs, it has difficulty solving complex MOPs with multiple local fronts, and the diversity of its nondominated solutions is poor. Li et al. [11] proposed a grid-search-based multipopulation particle swarm optimization algorithm (GSMPSO-MM) for multimodal multiobjective optimization, which uses a multicluster algorithm based on k-means clustering to locate more equivalent Pareto sets in the decision space and uses a grid to explore high-quality solutions in the decision space, but the algorithm still suffers from insufficient convergence.

Other variants differ from the abovementioned MOPSO algorithms, which determine the search mechanism through the dominance relationship. In [12], a multiobjective particle swarm algorithm with random migration is proposed. The algorithm sets an age threshold: when a particle does not improve its individual position, its age increases, and when the particle's age exceeds the threshold, the particle is reinitialized to increase the diversity of the population and avoid falling into local extremes. However, because decomposition replaces the dominance relationship, the algorithm cannot cover the entire Pareto frontier when solving some complex MOPs. To solve this problem, Han et al. [13] proposed a multiobjective particle swarm optimization with an adaptive strategy for feature selection, which mainly uses the PBI decomposition method to select the optimal solution, adaptively provides different penalty values for each weight vector, and further improves the ability of MOPSO to solve MOPs. Some scholars have also proposed other improved MOPSO variants. In [14], a modified particle swarm optimization for multimodal multiobjective optimization was proposed. The algorithm introduces a community-based dynamic learning strategy to replace the global learning strategy, enhancing population diversity, and introduces a competition mechanism to improve the performance of the particle swarm algorithm. In [15], a self-organized speciation-based multiobjective particle swarm optimizer for multimodal multiobjective problems is proposed, which uses a species formation strategy to establish multiple stable ecological niches and increase the probability of species flying to the true Pareto front. In [16], a simplified multiobjective particle swarm optimization algorithm was proposed. The algorithm uses an adaptive penalty mechanism for the PBI parameters, which adaptively adjusts the penalty value to enhance the selection pressure of the archive and of each particle. In [17], multiobjective reservoir operation using particle swarm optimization with adaptive random inertia weights (ARIW) is proposed, which combines the ARIW algorithm with traditional PSO, randomly generates inertia weights using a triangular probability density function, and automatically adjusts the probability distribution function as evolution proceeds. In [18], based on penalty function theory and the particle swarm optimization (PSO) algorithm, an improved multiobjective particle swarm optimization algorithm based on archive management is proposed, in which multiswarm coevolution with crowding-distance archive management is used to improve the search ability and diversity of the population.

Most of the current improved MOPSO variants rely on only one strategy to select the learning samples of particles and maintain external archives, without considering the performance of particles at different stages, resulting in insufficient convergence and diversity when solving complex MOPs. In this paper, a multiobjective particle swarm algorithm based on the grid technique and multistrategy is proposed, and experimental simulations show that the algorithm effectively improves the performance of MOPSO.

2.3. Particle Swarm Optimization. Particle swarm optimization [19] is an optimization algorithm based on an iterative model, proposed by Eberhart and Kennedy in 1995 and inspired by the foraging behavior of bird flocks. The group searches for the global optimal solution within a range, and each particle has a fitness and a velocity used to adjust its flight direction. During the flight, all particles in the group have a memory function, and each particle continuously learns from its own best position and the best particle position in the group. The particle velocity and position update formulas are as follows:

    v(t + 1) = ωv(t) + c_1 r_1 (p(t) − x(t)) + c_2 r_2 (g(t) − x(t)),    (6)

    x(t + 1) = x(t) + v(t + 1).                                          (7)

Here, ω is the inertia weight, whose size controls the search ability; c_1 and c_2 are acceleration factors that give the particles the ability to summarize their own experience and to learn from outstanding individuals in the group; r_1 and r_2 are random numbers in the range [0, 1]; and p and g represent the individual optimal solution and the global optimal solution, respectively.
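As an illustration of the velocity and position updates in equations (6) and (7), the canonical single-particle step can be sketched as follows. This is a minimal sketch, not the paper's implementation; the function name and the list-based vector representation are our own.

```python
import random

def pso_step(x, v, pbest, gbest, omega=0.4, c1=2.0, c2=2.0):
    """One velocity/position update for a single particle, eqs (6)-(7).

    x, v, pbest, gbest are lists of floats of equal length (one entry
    per decision variable); r1 and r2 are drawn fresh per dimension.
    """
    new_x, new_v = [], []
    for xi, vi, pi, gi in zip(x, v, pbest, gbest):
        r1, r2 = random.random(), random.random()
        vi_next = omega * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)  # eq (6)
        new_v.append(vi_next)
        new_x.append(xi + vi_next)                                        # eq (7)
    return new_x, new_v
```

When `pbest` and `gbest` coincide with the current position, the cognitive and social terms vanish and only the inertia term `omega * v` moves the particle, which is a convenient sanity check.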

3. The Proposed GTMSMOPSO Algorithm

The problems of maintaining the size of external archives, selecting learning samples of particles among many noninferior solutions, maintaining the diversity of the population, and preventing the population from falling into local optimal solutions are the core of MOPSO. In this paper, we propose a grid technique and a multistrategy approach for the maintenance of external archives and the selection of globally optimal particles, and we further improve the performance of MOPSO by a variation operation on the positions of particles.

3.1. Grid Technique. In MOPSO, each iteration generates a set of noninferior solutions. The grid technique divides the grid area using the objective function values of all noninferior solutions in the external archive, so that the grid cell to which a noninferior solution belongs can be found from its function values. The grid technique is defined as follows.

3.1.1. Coordinates of the Grid Boundary. In the objective space, let the minimum and maximum values of the noninferior solutions on the mth objective function be min f_m(x) and max f_m(x), respectively. Then the lower and upper boundaries of the mth objective grid are given by

    L_m = min f_m(x) − (max f_m(x) − min f_m(x))/b,
                                                              (8)
    U_m = max f_m(x) + (max f_m(x) − min f_m(x))/b,

where b is the number of grid divisions.

3.1.2. Grid Coordinates (G_m). Let f_m(x) be the function value of particle x on the mth objective; then the corresponding grid coordinate is

    G_m(x) = (f_m(x) − L_m)/d_m,                              (9)

where d_m = (U_m − L_m)/(b − 1) = ((max f_m(x) − min f_m(x)) + 2ep)/(b − 1) and ep = (max f_m(x) − min f_m(x))/b is the grid width.

3.2. Global Optimal Sample Selection Based on Grid Technique and Multistrategy. The selection of the global optimal particle in MOPSO is a key factor affecting the diversity and convergence of the algorithm. To enhance the convergence and diversity of MOPSO, this paper uses two different evaluation indices (a convergence evaluation index and a distribution evaluation index) to select noninferior solutions in each grid cell based on the roulette wheel strategy.

3.2.1. Selection Method of Convergence Evaluation Index and Distribution Evaluation Index. To improve the convergence of the algorithm, the noninferior solution with the largest contribution to the Pareto solution set is selected as the global optimal sample within a grid cell. In this paper, the inflection point distance of a particle is used as the evaluation index of particle convergence.

The inflection point distance (IPD) is obtained by determining an extreme straight line through the two extreme noninferior solutions in the external archive (in the two-objective case) and then calculating the distance from the particle to this line. If there are three or more objective functions, it is necessary to calculate the distance from each solution in the noninferior solution set to the extreme "hyperplane." The equation of the extreme straight line L is

    Ax + By + C = 0,                                          (10)

where A, B, and C are real numbers. Supposing that the coordinates of a noninferior solution x are (x_0, y_0), the distance from x to the straight line L is

    |Ax_0 + By_0 + C| / √(A² + B²).                           (11)

The hyperplane in n-dimensional space is determined by the following equation:

    Ax + b = 0,                                               (12)

where A is the coefficient vector, x is an n-dimensional column vector, and b is a real number related to the distance from the hyperplane to the origin. The distance from each solution in the noninferior solution set to the extreme "hyperplane" is

    |Ax + b| / ‖A‖.                                           (13)

The inflection point of the curve is the "most concave" point on the Pareto front surface, which contributes the most to the Pareto solution set. IPD is an index reflecting the convergence of particles within the same grid cell: the larger a particle's IPD index, the better its convergence. Taking the two-objective case as an example, as shown in Figure 1, the black noninferior solution has the best convergence, the red noninferior solutions are the extreme noninferior solutions, and the noninferior solutions above the extreme straight line have the worst convergence.

To increase the diversity of the population, the most dispersed noninferior solution in the same grid cell is selected as the global optimal sample. In this paper, the grid density of a particle is used as the evaluation standard of particle distribution. The average of the sum of Euclidean distances between particle x_i and all other particles in the same grid cell is defined as the grid density of particle x_i:

    gd(x_i) = (1/S) Σ_{j=1}^{S} √( Σ_{m=1}^{M} (f_m(x_i) − f_m(x_j))² ),    (14)

where S is the number of particles in the same grid cell and M is the number of objective functions.

Figure 1: Selection of the global optimal sample (shown in the f1-f2 objective space).

gd is an index that reflects the distribution of particles within the same grid cell: the larger a particle's gd index, the more dispersed the particle distribution.

3.2.2. The Gbest Selection Method of Two Different Evaluation Indicators on Roulette. To ensure the diversity of the population, noninferior solutions from every grid cell are selected as global optimal samples. Because the number of noninferior solutions differs between cells, the roulette strategy selects more noninferior solutions from cells that contain more of them. To further balance the convergence and diversity of the algorithm and enable the population to explore more regions, one of the convergence evaluation index and the diversity evaluation index is selected according to the current iteration number to choose noninferior solutions within a cell. Suppose the noninferior solution with the largest inflection point distance in the jth cell is max(IPD_j), and the noninferior solution with the largest grid density in the jth cell is max(gd_j). Then, in the jth cell, a particle selects max(IPD_j) or max(gd_j) as its gbest according to

    gbest_i = { max(gd_j),   if rand < k × (iter/itermax),
              { max(IPD_j),  otherwise,                       (15)

where rand is a random number in [0, 1], iter is the current iteration number, itermax is the maximum number of iterations, and k is a real number. Experimental simulation shows that the best effect is obtained with k = 1.3. From equation (15), it can be seen that in the early iterations, when particle diversity is good but convergence is poor, particles have a higher probability of choosing max(IPD) as the gbest to guide their flight, which improves exploitation. In the late iterations, when particles converge better but are less diverse, they have a higher probability of selecting max(gd) as the gbest, which improves exploration.

3.3. External Archive Maintenance Based on Mixed Evaluation Indicators. In MOPSO, each iteration of the algorithm generates a set of noninferior solutions, which are stored in the external archive. As the algorithm runs, the number of noninferior solutions grows, and the archive must be maintained once its maximum size is reached. Most MOPSO variants delete highly crowded particles to maintain the external archive. Although this method guarantees a uniform distribution of noninferior solutions in the archive, it may delete noninferior solutions with better convergence. Therefore, this paper adopts a hybrid evaluation index combining the convergence evaluation index and the distribution evaluation index to delete noninferior solutions from the external archive and improve the quality of candidate solutions.

Deletion proceeds by finding the grid cell with the largest number of particles and applying a hybrid evaluation index of the particles' convergence and distribution indices. The specific formula is

    ME(x_j) = IPD(x_j) × gd(x_j).                             (16)

The larger a particle's IPD index, the greater its contribution to the Pareto solution set and the better its convergence. From formula (14), the larger a particle's gd index, the more dispersed the particle distribution. From formula (16), the particle's ME index therefore reflects both the convergence and the distribution of the particle: the larger the ME index, the better the overall performance of the particle. Hence, when the external archive reaches its maximum size, the cell with the largest number of particles is found, and the particles with the smallest ME index in that cell are deleted to improve the quality of the candidate solutions.
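A minimal sketch of the roulette-style gbest choice of equation (15) and the ME-based archive pruning of equation (16). The per-cell data layout (index lists plus precomputed IPD and gd dictionaries) and the helper names are our own assumptions, not the paper's code; k = 1.3 follows the paper's tuning.

```python
import random

def select_gbest(cell, ipd, gd, it, max_it, k=1.3):
    """Equation (15): with probability k*it/max_it pick the cell member
    with the largest gd (diversity), otherwise the one with the largest
    IPD (convergence)."""
    if random.random() < k * it / max_it:
        return max(cell, key=lambda i: gd[i])    # late phase: favour spread
    return max(cell, key=lambda i: ipd[i])       # early phase: favour convergence

def prune_archive(cell, ipd, gd):
    """Equation (16): in the most crowded cell, delete the member with
    the smallest mixed index ME = IPD * gd and return its index."""
    worst = min(cell, key=lambda i: ipd[i] * gd[i])
    cell.remove(worst)
    return worst
```

In a full loop, `prune_archive` would be called repeatedly on the most crowded cell until the archive is back under its size limit.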

3.4. Mutation Operation. MOPSO has strong exploration capability in the early iterations and can continuously search new areas. However, its fast convergence makes the algorithm likely to search around a local optimal solution in the later iterations, causing premature convergence. Therefore, to keep the population diversity good in the early iterations and to increase it in the later iterations, a mutation operation is used, and the degree of position variation is adjusted with the number of iterations. At the beginning of the run, the diversity of the population is good, so the variation of the particles is kept small and the diversity of the population remains stable; in the later stage, the diversity of the population is insufficient, so the particles are varied more strongly to increase the diversity of the population. In GTMSMOPSO, the particle velocity update formula is given in equation (6), and the particle position update formula is

    x(t + 1) = x(t) + (e^(−n/N) / (1 + e^(−n/N))) v(t + 1),   (17)

where N is the maximum number of iterations, n is the current iteration number, and e is the base of the natural logarithm.

3.5. The Main Flow of the Algorithm. The main flow of GTMSMOPSO is as follows.

Step 1. Set the population size, the maximum size of the external archive, the particle dimension, the number of grid divisions, and the maximum number of iterations. Initialize positions and velocities randomly, set each particle's initial position as its individual best, and create an external archive initialized as an empty set.

Step 2. Calculate the fitness of each particle and store the noninferior solutions in the external archive.

Step 3. Build grids over the objective space, and evaluate the noninferior solutions in each grid cell with the inflection point distance formula and the grid density formula.

Step 4. The individual best sample is the better of the current position and the individual's historical best position; if they rank equally, one of them is selected at random. First use the roulette method to determine the number of noninferior solutions to be selected in each grid cell, and then use equation (15) to determine the global optimal sample for the cell.

Step 5. Use flight formulas (6) and (17) to update the velocity and position of each particle.

Step 6. If the external archive is not yet saturated, continue adding noninferior solutions. If the external archive is saturated, find the grid cell with the largest number of particles, use formula (16) to delete the particles with the smallest ME index, and then introduce new noninferior solutions into the external archive to update it.

Step 7. If the current number of iterations is less than the maximum number of iterations, return to Step 2; otherwise, output the optimal solution set.

The algorithm flowchart of GTMSMOPSO is presented in Figure 2.

4. Experimental Simulation Analysis

4.1. Performance Evaluation Index. To evaluate the performance of each algorithm, this paper uses the inverted generational distance (IGD) [20] and the hypervolume (HV) [21]. IGD measures the distance between the true Pareto front and the approximate Pareto front: the lower the IGD value, the better the convergence and diversity of the approximate Pareto front obtained by the algorithm and the closer it is to the true Pareto front. Its calculation formula is

    IGD(P, Q) = ( Σ_{v∈P} d(v, Q) ) / |P|,                    (18)

where P is the set of solutions uniformly distributed over the true PF, |P| is the number of individuals in that set, Q is the set of Pareto optimal solutions obtained by the algorithm, and d(v, Q) is the minimum Euclidean distance from an individual v in P to the population Q.

The HV measures the distribution of the noninferior set obtained by the algorithm in the objective space; a larger HV indicates a better distribution.

4.2. Selection of Parameters. The parameter settings of GTMSMOPSO are as follows: the external archive size is set to 200, the population size is set to 200, the maximum number of iterations is 2000, the number of grid divisions is 50, c_1 = c_2 = 2, and ω = 0.4. The parameter settings of the other comparison multiobjective particle swarm optimization algorithms are consistent with the original literature.

4.3. Experimental Results and Data Analysis. To verify the effectiveness of GTMSMOPSO, we selected 14 representative functions from the typical multiobjective test function sets ZDT [22], UF [23], and DTLZ [24] (ZDT1-4, ZDT6, UF2-5, UF8-10, DTLZ1, and DTLZ6) and the comparison algorithms CMOPSO [25], NSGAII [26], MOEAD [27], MOPSOCD [28], and NMPSO [29]. All algorithms except GTMSMOPSO run on the platform of [30]. The mean (Mean) and standard deviation (Std.) of the IGD and HV metrics for GTMSMOPSO and the five comparison algorithms on the 14 test functions are given in Tables 1 and 2, respectively. The bolded data in the tables represent the best values.

The multiobjective test functions ZDT (ZDT1-4 and ZDT6) are all biobjective test functions. ZDT1 has a convex

Start

Initialize the particle swarm

Calculate the fitness value and


perform pareto ranking

Use ME indication to update and


maintain external archive update

Update pbest based on Pareto


dominance

Use equation (15) to update gbest

Flight

Whether to terminate
No

Yes end

Figure 2: Algorithm flowchart of GTMSPSO.
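The "flight" step updates each particle's velocity and position. The paper's modified flight formulas (6) and (17) are not reproduced in this excerpt, so the sketch below uses the canonical inertia-weight PSO update of Kennedy and Eberhart [19]; the values of `w`, `c1`, and `c2` are assumed, not taken from the paper.

```python
import random

def flight(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0, rng=random):
    """Canonical PSO flight: the new velocity blends inertia with random
    pulls toward the personal best and the global best, then the particle
    moves by its new velocity."""
    v_new = [w * vi
             + c1 * rng.random() * (pb - xi)
             + c2 * rng.random() * (gb - xi)
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    x_new = [xi + vi for xi, vi in zip(x, v_new)]
    return x_new, v_new
```

A particle that already sits on both its personal and global best with zero velocity stays put, which is a quick sanity check on the update.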

Table 1: The IGD values (Mean and Std.) of the six algorithms on the 14 test functions.

Function  Metric  GTMSMOPSO  CMOPSO    NSGAII    MOEAD     MOPSOCD   NMPSO
ZDT1      Mean    5.65E-03   3.25E-03  3.87E-02  1.89E-01  2.05E-02  3.16E-02
ZDT1      Std.    6.97E-04   5.13E-04  8.49E-03  7.89E-02  6.41E-02  9.92E-03
ZDT2      Mean    5.99E-03   2.80E-03  1.09E-01  5.66E-01  1.43E-01  6.25E-02
ZDT2      Std.    9.06E-04   3.78E-04  9.68E-02  7.72E-02  2.24E-01  1.32E-01
ZDT3      Mean    2.05E-01   3.56E-03  3.27E-02  1.73E-01  3.83E-02  9.79E-02
ZDT3      Std.    6.89E-03   6.42E-04  8.31E-03  6.56E-02  4.76E-02  7.02E-03
ZDT4      Mean    5.90E-03   1.63E+02  3.14E+01  4.77E-01  1.57E+02  1.37E+02
ZDT4      Std.    6.89E-04   3.42E+01  5.00E+00  1.92E-01  3.07E+01  2.71E+01
ZDT6      Mean    2.20E-03   2.72E-01  2.99E+00  8.37E-02  1.39E+00  2.21E-03
ZDT6      Std.    1.92E-04   2.65E-01  2.10E-01  2.12E-02  1.80E+00  1.60E-04
UF2       Mean    8.26E-02   6.38E-02  6.63E-02  2.17E-01  1.38E-01  8.19E-02
UF2       Std.    7.73E-03   4.76E-03  6.48E-03  7.33E-02  1.45E-02  7.06E-03
UF3       Mean    3.18E-01   3.97E-01  4.40E-01  3.47E-01  3.65E-01  3.64E-01
UF3       Std.    3.12E-02   2.56E-02  1.82E-02  2.85E-02  5.30E-02  5.59E-02
UF4       Mean    5.55E-02   1.09E-01  8.07E-02  1.15E-01  7.82E-02  6.39E-02
UF4       Std.    4.55E-03   1.12E-02  2.33E-03  4.84E-03  7.18E-03  8.78E-03
UF5       Mean    1.93E+00   8.45E-01  8.20E-01  1.36E+00  3.92E+00  1.69E+00
UF5       Std.    1.70E-01   1.75E-01  2.98E-01  2.73E-01  4.67E-01  4.19E-01
UF8       Mean    4.26E-01   5.54E-01  3.12E-01  5.01E-01  7.61E-01  4.48E-01
UF8       Std.    8.11E-02   1.05E-01  4.58E-02  2.53E-01  1.83E-01  1.18E-01
UF9       Mean    4.34E-01   8.45E-01  4.49E-01  5.36E-01  9.05E-01  4.70E-01
UF9       Std.    2.67E-02   1.14E-01  6.02E-02  1.05E-01  1.29E-01  6.12E-02
UF10      Mean    4.88E-01   4.51E+00  1.44E+00  7.04E-01  5.23E+00  1.50E+00
UF10      Std.    1.91E-01   4.79E-01  5.08E-01  9.49E-02  8.63E-01  3.41E+00
DTLZ1     Mean    6.34E-02   7.46E+01  5.38E+00  5.79E+00  4.29E+01  4.25E+01
DTLZ1     Std.    1.86E-02   1.03E+01  1.53E+00  4.08E+00  1.56E+01  6.56E+00
DTLZ6     Mean    2.62E-02   1.47E-01  4.05E-03  1.87E-01  5.76E-02  1.29E-02
DTLZ6     Std.    3.61E-02   3.32E-01  1.46E-03  3.34E-01  1.65E-01  2.18E-03
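The IGD values in Table 1 are smaller-is-better: IGD averages, over a sampling of the true Pareto front, the distance from each reference point to the nearest obtained solution, so it penalises both poor convergence and poor coverage. A minimal sketch of the metric:

```python
import math

def igd(reference_front, obtained):
    """Inverted Generational Distance: mean Euclidean distance from each
    true Pareto-front sample to its nearest obtained solution (lower is
    better; zero means the reference samples are matched exactly)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return (sum(min(dist(r, s) for s in obtained) for r in reference_front)
            / len(reference_front))
```

In practice the reference front is a dense, evenly spaced sampling of the analytically known Pareto front of each benchmark.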

Table 2: The HV values (Mean and Std.) of the six algorithms on the 14 test functions.

Function  Metric  GTMSMOPSO  CMOPSO    NSGAII    MOEAD     MOPSOCD   NMPSO
ZDT1      Mean    7.18E-01   7.20E-01  6.69E-01  5.41E-01  7.00E-01  6.87E-01
ZDT1      Std.    7.16E-04   6.96E-04  1.38E-02  5.36E-02  7.54E-02  1.18E-02
ZDT2      Mean    4.41E-01   4.45E-01  3.29E-01  9.20E-02  3.30E-01  4.03E-01
ZDT2      Std.    1.31E-03   6.18E-04  5.30E-02  2.53E-02  1.68E-01  9.11E-02
ZDT3      Mean    6.55E-01   6.00E-01  5.77E-01  5.58E-01  5.82E-01  5.69E-01
ZDT3      Std.    2.89E-03   1.31E-03  4.85E-03  7.46E-02  3.33E-02  4.15E-03
ZDT4      Mean    7.18E-01   0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00
ZDT4      Std.    1.42E-03   0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00
ZDT6      Mean    3.84E-01   1.86E-01  0.00E+00  0.00E+00  1.82E-01  3.90E-01
ZDT6      Std.    1.19E-03   1.49E-01  0.00E+00  0.00E+00  1.51E-01  1.37E-04
UF2       Mean    6.28E-01   6.45E-01  6.37E-01  5.58E-01  5.45E-01  6.22E-01
UF2       Std.    6.92E-03   6.62E-03  5.41E-03  3.05E-02  1.85E-02  7.51E-03
UF3       Mean    6.30E-01   2.62E-01  2.11E-01  2.91E-01  2.66E-01  2.75E-01
UF3       Std.    6.75E-03   2.78E-02  1.19E-02  3.91E-02  4.02E-02  5.43E-02
UF4       Mean    3.35E-01   2.85E-01  3.34E-01  2.85E-01  3.34E-01  3.59E-01
UF4       Std.    2.56E-02   9.46E-03  3.71E-03  5.11E-03  9.82E-03  1.32E-02
UF5       Mean    3.70E-01   1.67E-02  1.06E-02  0.00E+00  0.00E+00  6.91E-05
UF5       Std.    5.95E-03   2.89E-02  2.42E-02  0.00E+00  0.00E+00  3.79E-04
UF8       Mean    3.88E-01   1.21E-02  2.82E-01  1.72E-01  8.58E-03  2.85E-01
UF8       Std.    1.94E-02   1.72E-02  3.36E-02  6.56E-02  1.44E-02  4.65E-02
UF9       Mean    2.84E-01   2.37E-02  2.95E-01  2.80E-01  2.18E-02  3.14E-01
UF9       Std.    3.00E-02   2.75E-02  5.75E-02  6.53E-02  2.29E-02  5.54E-02
UF10      Mean    2.66E-01   0.00E+00  0.00E+00  4.02E-02  0.00E+00  0.00E+00
UF10      Std.    6.06E-02   0.00E+00  0.00E+00  2.66E-02  0.00E+00  0.00E+00
DTLZ1     Mean    7.60E-01   0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00
DTLZ1     Std.    3.43E-02   0.00E+00  0.00E+00  0.00E+00  0.00E+00  0.00E+00
DTLZ6     Mean    1.63E-01   1.69E-01  1.99E-01  1.44E-01  1.90E-01  1.98E-01
DTLZ6     Std.    4.01E-02   7.15E-02  3.10E-03  7.38E-02  2.77E-02  6.18E-04
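The HV values in Table 2 are larger-is-better: hypervolume measures the objective-space volume dominated by the obtained set with respect to a reference point. The published results depend on how PlatEMO normalises the fronts and places the reference point; for the two-objective case the computation reduces to a rectangle sweep, sketched here for illustration only:

```python
def hypervolume_2d(points, ref):
    """2-D hypervolume of a minimisation front: keep points that
    dominate the reference point, sort by f1, and accumulate the
    staircase of rectangles they dominate."""
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    hv, f2_level = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < f2_level:  # skip dominated points on the sweep
            hv += (ref[0] - f1) * (f2_level - f2)
            f2_level = f2
    return hv
```

An HV of 0.00E+00 in Table 2, as for several algorithms on ZDT4 and DTLZ1, means no obtained solution dominated the reference point at all.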

PF and ZDT2 has a nonconvex PF. From Tables 1 and 2, it can be seen that both GTMSMOPSO and CMOPSO have smaller IGD values and larger HV values on the ZDT1 and ZDT2 test functions, so they have better overall performance, while the performance of NSGAII and MOEAD is slightly worse. ZDT3 has a broken, nonconvex Pareto front; Tables 1 and 2 show that GTMSMOPSO has the best HV value on the ZDT3 test function, although it has a poor IGD value. ZDT4 has many locally optimal solutions and a convex PF, and ZDT6 has a nonconvex and nonuniformly spaced PF. GTMSMOPSO showed the best IGD and HV values on the ZDT4 test function. GTMSMOPSO showed the best IGD value on the ZDT6 test function, and its HV value was slightly lower than the HV value of
[Figure 3 (plots omitted): Pareto fronts obtained on the ZDT1 function by (a) GTMSMOPSO, (b) CMOPSO, (c) NSGAII, (d) MOEAD, (e) MOPSOCD, and (f) NMPSO; each panel plots f1 against f2 with the true Pareto front overlaid.]

NMPSO. In summary, it can be seen that GTMSMOPSO has the best performance on the ZDT series of multiobjective test functions.
The multiobjective test functions UF2-UF5 are biobjective test functions and UF8-10 are three-objective test functions. As can be seen from Tables 1 and 2, GTMSMOPSO
[Figure 4 (plots omitted): Pareto fronts obtained on the ZDT2 function by (a) GTMSMOPSO, (b) CMOPSO, (c) NSGAII, (d) MOEAD, (e) MOPSOCD, and (f) NMPSO; each panel plots f1 against f2 with the true Pareto front overlaid.]
[Figure 5 (plots omitted): Pareto fronts obtained on the ZDT4 function by (a) GTMSMOPSO, (b) CMOPSO, (c) NSGAII, (d) MOEAD, (e) MOPSOCD, and (f) NMPSO; each panel plots f1 against f2 with the true Pareto front overlaid.]

has four optimal IGD averages and four optimal HV averages on the UF series of test functions, and performs suboptimally for the IGD value on the UF8 test function. In summary, it can be seen that GTMSMOPSO outperforms the other five multiobjective intelligent algorithms on the UF series of test functions.
[Figure 6 (plots omitted): Pareto fronts obtained on the ZDT6 function by (a) GTMSMOPSO, (b) CMOPSO, (c) NSGAII, (d) MOEAD, (e) MOPSOCD, and (f) NMPSO; each panel plots f1 against f2 with the true Pareto front overlaid.]
[Figure 7 (plots omitted): Pareto fronts obtained on the three-objective UF10 function by (a) GTMSMOPSO, (b) CMOPSO, (c) NSGAII, (d) MOEAD, (e) MOPSOCD, and (f) NMPSO in f1-f2-f3 objective space, with the true Pareto front overlaid.]

The multiobjective test functions DTLZ1 and DTLZ6 are both three-objective test functions. DTLZ1 is a multimodal function containing many local Pareto hyperplanes. The GTMSMOPSO algorithm achieves the best IGD and HV values on the DTLZ1 function, which reflects a good performance. The GTMSMOPSO algorithm on the DTLZ6
[Figure 8 (plots omitted): Pareto fronts obtained on the three-objective DTLZ1 function by (a) GTMSMOPSO, (b) CMOPSO, (c) NSGAII, (d) MOEAD, (e) MOPSOCD, and (f) NMPSO in f1-f2-f3 objective space, with the true Pareto front overlaid.]
[Figure 9 (plots omitted): IGD convergence curves of GTMSMOPSO, CMOPSO, NSGAII, MOEAD, MOPSOCD, and NMPSO over 2000-10000 evaluations on (a) ZDT1, (b) ZDT2, (c) ZDT4, (d) ZDT6, (e) UF10, and (f) DTLZ1.]
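Convergence curves like those in Figure 9 come from logging the IGD of the current archive at fixed evaluation budgets. A hypothetical driver is sketched below; the names `step` and `igd_of` are illustrative stand-ins, not functions from the paper:

```python
def igd_curve(step, igd_of, budget=10000, interval=2000):
    """Run an optimiser one evaluation at a time and record
    (evaluations, IGD) pairs for plotting a convergence curve."""
    curve = []
    for e in range(1, budget + 1):
        archive = step()  # advance the optimiser by one evaluation
        if e % interval == 0:
            curve.append((e, igd_of(archive)))
    return curve
```

A curve that drops quickly and then flattens at a small value, as GTMSMOPSO's does in Figure 9, indicates fast and stable convergence.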

function has suboptimal IGD values and HV values. In summary, it can be seen that the performance of the GTMSMOPSO algorithm on the test functions of the DTLZ series is good.
To visualize the convergence and diversity of the noninferior solution sets obtained by each algorithm, Figures 3-8 show the noninferior solution sets obtained by each algorithm on the two-objective test functions (ZDT1, ZDT2, ZDT4, and ZDT6) and the three-objective test functions (UF10 and DTLZ1), with the true Pareto front shown. The experimental simulations show that NSGAII and MOEAD suffer from underdiversity and underconvergence on multiple ZDT test functions, and CMOPSO suffers from severe underdiversity and underconvergence on the ZDT4 test function, as seen in Figures 3-6. However, GTMSMOPSO not only approximates the true Pareto front on the test functions of the ZDT series but also has better distributivity. Therefore,

GTMSMOPSO has better convergence and distribution on the test functions of the ZDT series. As can be seen from Figures 7 and 8, GTMSMOPSO is closer to the true Pareto front on the three-objective test functions (UF10 and DTLZ1), while the other five algorithms show underconvergence or underdistribution on multiple functions. In summary, it can be seen that GTMSMOPSO has good convergence and distributivity compared with the other five algorithms.

From Figure 9, it can be concluded that GTMSMOPSO converges faster to a smaller and more stable IGD value on the six multiobjective test functions ZDT1, ZDT2, ZDT4, ZDT6, UF10, and DTLZ1. Therefore, the algorithm in this paper outperforms the other five comparison algorithms.

5. Conclusion

In this paper, we propose a multiobjective particle swarm algorithm based on grid technology and multistrategy. The algorithm improves the maintenance of the external archive and the selection of the global optimal sample, and a variation operation on particle positions is proposed. For the maintenance of the external archive, the particles in the grid with the highest number of particles are removed by using a mixed evaluation index strategy, which provides better-quality particles for selecting the global optimal solution. In the selection of the global optimal sample, the particles in the same grid are selected according to the current number of iterations using either the inflection point distance strategy or the grid density strategy, achieving a balance between exploration and exploitation and thus improving the diversity and convergence of the algorithm. To further enhance the diversity of the population, a linearly incremental variation of the particle positions is performed to strengthen the exploration capability of the algorithm. To verify the effectiveness of the proposed algorithm, simulation experiments comparing it with five other multiobjective intelligent algorithms are performed on 14 multiobjective functions (ZDT1-ZDT4, ZDT6, UF2-UF5, UF8-UF10, DTLZ1, and DTLZ6). The experimental results show that the proposed algorithm has good convergence and diversity and distributes solutions well in the objective space.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] R. M. Rizk-Allah, A. E. Hassanien, and A. Slowik, "Multi-objective orthogonal opposition-based crow search algorithm for large-scale multi-objective optimization," Neural Computing & Applications, vol. 32, no. 17, pp. 13715-13746, 2020.
[2] J. Luo, A. Gupta, and Y. S. Ong, "Evolutionary optimization of expensive multi-objective problems with co-sub-Pareto front Gaussian process surrogates," IEEE Transactions on Cybernetics, vol. 49, no. 5, pp. 1708-1721, 2018.
[3] X. Wang, L. Ma, S. Yang et al., "An aggregated pairwise comparison-based evolutionary algorithm for multi-objective and many-objective optimization," Applied Soft Computing, vol. 96, Article ID 106641, 2020.
[4] W. Peng, L. Bo, and Z. Wen, "Adaptive region adjustment to improve the balance of convergence and diversity in MOEA/D," Applied Soft Computing, vol. 70, pp. 797-813, 2018.
[5] C. Bao, L. Xu, and E. D. Goodman, "A new dominance-relation metric balancing convergence and diversity in multi- and many-objective optimization," Expert Systems with Applications, vol. 134, pp. 14-27, 2019.
[6] G. Chen and J. Li, "A diversity ranking based evolutionary algorithm for multi-objective and many-objective optimization," Swarm and Evolutionary Computation, vol. 48, no. 1, pp. 274-287, 2019.
[7] G. I. Chaman, C. A. Coello Coello, and A. Arias-Montaño, "MOPSOhv: a new hypervolume-based multi-objective particle swarm optimizer," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 266-273, 2014.
[8] F. Li, J. Liu, and S. Tan, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 3148-3155, 2015.
[9] J. Liu, F. Li, and X. Kong, "Handling many-objective optimisation problems with R2 indicator and decomposition-based particle swarm optimiser," International Journal of Systems Science, vol. 50, no. 1-4, pp. 320-336, 2019.
[10] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.
[11] G. Li, W. Wang, W. Zhang, Z. Wang, H. Tu, and W. You, "Grid search based multi-population particle swarm optimization algorithm for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 62, Article ID 100843, 2021.
[12] A. N. N. G. Kayakutlu, "Multi-objective particle swarm optimization with random immigrants," Complex & Intelligent Systems, vol. 6, no. 3, pp. 635-650, 2020.
[13] F. Han, W.-T. Chen, Q.-H. Ling, and H. Han, "Multi-objective particle swarm optimization with adaptive strategies for feature selection," Swarm and Evolutionary Computation, vol. 62, no. 6, Article ID 100847, 2021.
[14] X. Zhang, H. Liu, and L. Tu, "A modified particle swarm optimization for multimodal multi-objective optimization," Engineering Applications of Artificial Intelligence, vol. 95, Article ID 103905, 2020.
[15] B. Qu, C. Li, and J. Liang, "A self-organized speciation based multi-objective particle swarm optimizer for multi-modal multi-objective problems," Applied Soft Computing, vol. 86, Article ID 105886, 2019.
[16] V. Trivedi, P. Varshney, and M. Ramteke, "A simplified multi-objective particle swarm optimization algorithm," Swarm Intelligence, vol. 14, pp. 1-34, 2020.
[17] H.-T. Chen, W.-C. Wang, X.-N. Chen, and L. Qiu, "Multi-objective reservoir operation using particle swarm optimization with adaptive random inertia weights," Water Science and Engineering, vol. 13, no. 2, pp. 136-144, 2020.
[18] L. Chen, Q. Li, X. Zhao, Z. Fang, F. Peng, and J. Wang, "Multi-population coevolutionary dynamic multi-objective particle swarm optimization algorithm for power control based on improved crowding distance archive management in CRNs," Computer Communications, vol. 145, pp. 146-160, 2019.
[19] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[20] Q. Zhang, A. Zhou, and Y. Jin, "RM-MEDA: a regularity model-based multiobjective estimation of distribution algorithm," IEEE Transactions on Evolutionary Computation, vol. 12, no. 1, pp. 41-63, 2008.
[21] J. G. Falcón-Cardona and C. A. Coello Coello, "Convergence and diversity analysis of indicator-based multi-objective evolutionary algorithms," in Proceedings of the Genetic and Evolutionary Computation Conference, 2019.
[22] E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: empirical results," Evolutionary Computation, vol. 8, no. 2, pp. 173-195, 2000.
[23] Q. Zhang, A. Zhou, and S. Zhao, Multi-objective Optimization Test Instances for the CEC 2009 Special Session and Competition, Technical Report, 2008.
[24] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable multi-objective optimization test problems," in Proceedings of the Congress on Evolutionary Computation, vol. 1, pp. 825-830, 2002.
[25] X. Zhang, X. Zheng, R. Cheng, J. Qiu, and Y. Jin, "A competitive mechanism based multi-objective particle swarm optimizer with fast convergence," Information Sciences, vol. 427, pp. 63-76, 2018.
[26] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[27] Q. Zhang and H. Li, "MOEA/D: a multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.
[28] C. R. Raquel and P. C. Naval Jr., "An effective use of crowding distance in multi-objective particle swarm optimization," in Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, pp. 257-264, 2005.
[29] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32-46, 2018.
[30] Y. Tian, R. Cheng, X. Zhang, and Y. Jin, "PlatEMO: a MATLAB platform for evolutionary multi-objective optimization [educational forum]," IEEE Computational Intelligence Magazine, vol. 12, no. 4, pp. 73-87, 2017.