
2021 International Conference on Software Engineering & Computer Systems and 4th International Conference on Computational Science and Information Management (ICSECS-ICOCSIM) | 978-1-6654-1407-4/21/$31.00 ©2021 IEEE | DOI: 10.1109/ICSECS52883.2021.00124

Midrange Exploration Exploitation Searching Particle Swarm Optimization in Dynamic Environment

Nurul Izzatie Husna Fauzi, Zalili Musa, Nor Saradatul Akmar Zulkifli
Faculty of Computing, Universiti Malaysia Pahang, 26600 Pekan, Pahang.
[email protected], [email protected], [email protected]
Abstract—Conventional Particle Swarm Optimization was introduced as an optimization technique for real problems such as scheduling, tracking, and the traveling salesman problem. However, conventional Particle Swarm Optimization still has limitations in finding the optimal solution in a dynamic environment. Therefore, we propose a new enhancement of conventional Particle Swarm Optimization called Midrange Exploration Exploitation Searching Particle Swarm Optimization (MEESPSO). The main objective of this improvement is to enhance the searching ability of poor particles in finding the best solution in dynamic problems. In MEESPSO, we still apply the basic processes of conventional Particle Swarm Optimization, such as initialization of particle locations, population evaluation, and updating of particle locations. However, we add some enhancement processes in MEESPSO, such as relocating poor particles based on the average of the minimum and maximum particle fitness. To assess the performance of the proposed method, we compare it with three existing methods: conventional Particle Swarm Optimization, Differential Evolution Particle Swarm Optimization, and Global Best Local Neighborhood Particle Swarm Optimization. Experimental results on 50 datasets show that MEESPSO can find a quality solution in terms of number of particles and iterations, consistency, convergence, optimum value, and error rate.

Keywords — PSO, Optimization, Dynamic problem

I. INTRODUCTION

Conventional Particle Swarm Optimization (PSO) was initially proposed by Kennedy and Eberhart in 1995 as a population-based algorithm inspired by bird flocking, fish schooling, and human social behavior. As a population-based algorithm, conventional PSO is used as an optimization technique for solving dynamic problems in image processing, tracking, and scheduling.

In PSO, a population of particles is randomly initialized in the search space. The particles find the optimal value of the objective function by updating the velocity of each particle in the population over multiple iterations. Although conventional PSO can search for the optimum value, the particles are easily trapped in local optima [1]-[6]. This problem is usually caused by an imbalance between particle exploration and exploitation in the search space [7].

Furthermore, we have identified causes that lead particles to become trapped in local optima during exploration. Firstly, particle initialization is done randomly throughout the search space [8], [9]. Large-scale random values can expand the search range, which leaves the particle locations scattered across the search space [10]. When a particle is located near the global best, it easily converges to a local optimum; when particles deviate too far from the local optimum, the convergence speed of the algorithm is reduced.

Next, every particle should have self-adaptation in order to obtain a good search result [10]. Self-adaptation is the ability of an individual to adapt to a problem by reconfiguring itself accordingly [11]-[14]. Self-adaptation is important for spreading particle information to other particles. This information is used as the particles compete with each other to find the best solution for local exploitation and global exploration. Without self-adaptation, the local best particles have no incentive to compete with each other. This occurs when the particles are attracted to a single position in the search space: they become too focused on updating their positions based on the current best location and start ignoring other particles that may have the potential to find the optimum value. When a particle has no independent movement, it becomes trapped in a local optimum. Thus, this problem usually limits the ability of the particles to improve their performance by exploring for new local optima.

Furthermore, when no better solution is found, the particles are easily attracted to the global best and contribute only to local exploitation [15]-[18]. This problem also leads the particles to become trapped in local optima, and it may cause premature convergence. Premature convergence usually occurs due to the deceleration of the particles' velocities, which results in stagnation of the particles' fitness [19], [20]. This happens when the algorithm reaches convergence in the early iterations and every particle chooses the same position, the global best [21]-[23].

Based on these studies, the biggest challenge for PSO is how to reach the exact global optimal solution while preventing the particles from being trapped in local optima. To overcome these problems, a new enhanced PSO algorithm is proposed. The contributions of this paper are: 1) to develop a new variant of the PSO algorithm that enhances exploration and exploitation capability, and 2) to compare the new variant of PSO with other enhanced PSO algorithms using a dynamic dataset.

The remainder of this paper is organized as follows. Section 2 presents related work on improvements to PSO. Section 3 discusses the algorithms selected for this experiment. The experimental setup is described in Section 4. Lastly, the results of the experiment are presented in Section 5 and the conclusions in Section 6.


Authorized licensed use limited to: UNIVERSITY TEKNOLOGI MALAYSIA. Downloaded on June 01,2022 at 03:16:39 UTC from IEEE Xplore. Restrictions apply.
II. RELATED WORK

At the beginning of PSO studies, most researchers applied conventional PSO to find the optimum solution in static problems. However, conventional PSO still has weaknesses, especially in terms of exploration and exploitation. Researchers have therefore proposed new enhancements of PSO in order to solve dynamic problems [24]-[27].

For example, Luitel and Venayagamoorthy used a hybrid technique called Differential Evolution Particle Swarm Optimization (DEPSO) to solve a digital filter design problem [28]. The results show that DEPSO employs the best methodology of both algorithms, resulting in a reduction in design time. Furthermore, the fitness of each particle is significantly enhanced because the hybridization protects the particles from being trapped and leads them to a new global best solution. However, this algorithm is more suitable for particles that have different weighted errors. The mutation between four random parents that have the same weighted error may give a less impressive result for the new child, so the parent result may remain better than the child result and the parent is retained in the next iteration. If this situation persists, the particle is easily trapped in a local optimum.

Furthermore, Musa et al. proposed Global Best Local Neighborhood PSO (GbLN-PSO) to find the optimum solution in a dynamic environment [10], [29]-[31]. The proposed algorithm generates a small population as a local neighborhood of each particle. This step finds the best solution within the local neighborhood before exploiting towards the global best. The results show that GbLN-PSO can find a quality result on dynamic environment datasets. However, GbLN-PSO uses a greater number of particles than conventional PSO to represent the local neighborhoods, which increases its complexity.

Garg presented a hybrid technique of PSO and the Genetic Algorithm (GA) to overcome the limitations of both algorithms [32]. PSO is implemented to evaluate the decision vectors in this technique, whereas GA is used to improve the decision vectors using genetic operators such as crossover and mutation. Based on the results, the exploration and exploitation capabilities of the particles were improved over the conventional techniques. However, without a selection operator, PSO may waste resources on poor individuals. These poor individuals cannot compete with the other individuals and are easily attracted to the best individual, which usually causes the particles to converge early and become trapped in a local optimum.

A hybrid PSO based on dynamic clustering (DC-HPSO) for global optimization was introduced by Hongru et al. in order to balance local exploitation and global exploration [33]. In DC-HPSO, the particles are dynamically clustered into various sub-groups to push the dominating particles closer to a potential local optimum. Standard PSO is also applied to help non-dominant particles fly away from the dominant particle, so that non-dominant particles emerge from the cluster to search for a new local minimum. From a set of new local minima, a new global optimum can be found through a new cluster. The experimental results show that the DC-HPSO technique can speed up the convergence of the particles compared with other techniques; however, this may bring a premature convergence problem, in which particles get trapped in local optima too early [34]-[37].

Based on these studies, most researchers propose new modifications in order to overcome the problem of particles becoming trapped in local optima [38]-[40]. Researchers also consider improvement methods to balance global exploration and local exploitation, either by finding a new approach within PSO or by hybridizing PSO with other population-based algorithms [1]-[6]. Most of these enhancements have shown the ability to find quality results.

III. SWARM INTELLIGENT ALGORITHM

A. Particle Swarm Optimization (PSO)

Basically, there are three steps in the PSO algorithm: 1) initialize the particle locations, 2) evaluate the particles' fitness values to determine the local best and global best, and 3) update the particle positions using velocity.

1. The first step is initializing the particle locations. The locations of the particles are distributed through the predefined dimension space using a random method, as in equation (1):

   x_i = rand(r, c)   (1)

   where x_i holds the random location of particle i, represented by row r and column c, and i is the index of the particle, i = {1, 2, 3, ..., N}. The maximum number of particles is denoted by the letter N.

2. The second step is evaluating the particles' fitness values to determine the local best and global best. The best position visited by a particle along its trajectory is called the local best (P_best), while the best position found by the whole swarm is known as the global best (G_best). The best position is chosen based on the smallest fitness value min(f) of all the particle fitness values f.

3. The next step is updating the particle positions using velocity. The particles update their velocity and position in the search space depending on the optimal positions found by themselves and by the entire population. The velocity of every particle is updated at each iteration using equation (2):

   v_i = w*v_i + c1*r1*(P_best,i - x_i) + c2*r2*(G_best - x_i)   (2)

   where v_i and x_i denote the particle's current velocity and position, P_best,i represents the particle's best position found so far, and G_best is the best position found by the entire population. Certain parameters are important to ensure the velocity of the particle is correct: w, c1, and c2. Here w is the inertia weight; in order to control the direction of the particles, the value of w should be less than 1, and the best range of w is [0.1, 0.9]. The weights of the random acceleration components that pull a particle towards the local and global best are represented by the acceleration coefficients c1 and c2 [21]; they are generally selected in the range [1, 2]. Moreover, r1 and r2 are random values uniformly distributed in the range [0, 1].

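The PSO steps described in this section, together with the position update that completes step 3, can be sketched in a short program. This is a minimal illustration, not the authors' Matlab implementation: the sphere objective, the search bounds, and the fixed inertia weight w = 0.5 are assumptions made for the sake of a runnable example, while c1 = c2 = 1.5 and the uniform [0, 1] random coefficients follow the parameter setup described later in the paper.

```python
import random

random.seed(1)  # for reproducibility of this sketch

def sphere(x):
    # Assumed stand-in objective (minimization).
    return sum(v * v for v in x)

def pso(fitness, n_particles=30, n_iter=50, lo=0.0, hi=30.0, dim=2,
        w=0.5, c1=1.5, c2=1.5):
    # Step 1: random initialization of particle locations, Eq. (1).
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    # Step 2: initial local bests and global best (smallest fitness wins).
    pbest = [xi[:] for xi in x]
    pbest_f = [fitness(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 3: velocity update, Eq. (2).
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Position update, Eq. (3).
                x[i][d] += v[i][d]
            f = fitness(x[i])
            # Re-evaluate local and global best after the move.
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

best, best_f = pso(sphere)
```

The loop stops after a fixed number of iterations, matching the stopping condition used in the paper.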
Lastly, the positions of all the particles are updated according to equation (3):

   x_i = x_i + v_i   (3)

The particle position update is based on the combination of the current position and the velocity of the particle. The process is repeated until the stopping condition (number of iterations) for the algorithm is met.

B. Differential Evolution Particle Swarm Optimization (DEPSO)

In 2003, Zhang and Xie introduced a hybrid of Differential Evolution (DE) and PSO, namely Differential Evolution Particle Swarm Optimization (DEPSO) [28]. The aim of this hybrid is to produce random particle mutations using two different evolutionary algorithms. The mutation process increases population diversity and the algorithm's ability to escape from local minima. Previous results show that DEPSO combines the best practices of both algorithms and thus helps reduce the design time.

There are four basic steps in the DEPSO algorithm: 1) initialize the particle locations, 2) evaluate the particles' fitness values to determine the local best and global best, 3) update the particle positions using velocity, and 4) mutate the particles using the DE algorithm.

Generally, the PSO algorithm is used to produce and update the particle locations in the population, while the DE algorithm is applied to create a new particle (offspring) based on the mutation of selected particles. The steps of particle mutation by the DE algorithm are:

1. The global best particle is taken as the parent, while local best particles are chosen randomly from the population to calculate the particles' error. The particles' error is used to find the difference values between particles and to create a new offspring. The minimum number of selected particles should be more than 2 and even.

2. Calculate the error between the selected particles. The particles' error is determined as in (4):

   δ = (Σ ā) / N   (4)

   where δ is the particles' error, ā is the difference between two local best particles chosen from the current local best population, and N is the number of differences ā involved. Each ā is calculated as ā = (p1 - p2), where p1 and p2 are the chosen local best particles. The calculated error is used to mutate the parent and create an offspring. The mutation takes place according to (5):

   x′ = G_best + δ   (5)

   where x′ is the offspring and G_best is the global best position of the parent.

3. Evaluate the fitness value of the offspring. If the offspring has better fitness than the parent, the offspring replaces the parent position; otherwise, the parent position is retained for the next iteration.

The position of each particle is then updated based on the new global best result. All four basic steps of the DEPSO algorithm are repeated until the stopping condition is met.

C. Global Best Local Neighborhood Particle Swarm Optimization (GbLN-PSO)

Global Best Local Neighborhood PSO (GbLN-PSO) is another enhanced PSO method, introduced by Zalili et al. in 2016 [10], [29]-[31]. The GbLN-PSO algorithm still applies the three basic steps of conventional PSO: 1) initialize the particle locations, 2) evaluate the particles' fitness values to determine the local best and global best, and 3) update the particle positions using velocity.

However, GbLN-PSO enhances step 2 by generating and evaluating a local neighborhood population for each particle along the particle's movement towards the global best. The aim of this enhancement is to find a new potential solution (a new local best) among the local neighborhood population. Therefore, every particle must generate a small population for its neighborhood using equation (6):

   x′_n = rand(0, d) + x_i   (6)

where x′_n holds the location of a particle's local neighbor, rand(0, d) is a random value between 0 and d, and d = x_i - P_best, with x_i the location of the particle and P_best the location of its local best. Figure 1 illustrates the process of generating the local neighborhood population along the movement to the global best.

Figure 1 Process to generate local neighborhood population (panels: before moving, after moving, and at the global best, with the local neighborhood shown around the particle).

After the local neighborhood population is generated, the fitness of the local optimum is compared with that of its local neighborhood to find a new local best. If the fitness value of a neighbor is smaller than the local best (f′_n < f_i), the position and fitness of that neighbor are assigned as the new local best. The process then continues by evaluating the global best.

D. Midrange Exploration Exploitation Searching Particle Swarm Optimization (MEESPSO)

Based on the problems discussed in Section 2, we propose a new enhancement of the PSO algorithm called Midrange Exploration Exploitation Searching Particle Swarm Optimization (MEESPSO). The main objective of this improvement is to enhance the searching ability of weak particles in finding the best solution. In this enhanced algorithm, we still apply all three basic steps of the conventional PSO algorithm, but we add another process, particle mutation. Therefore, there are four steps in the MEESPSO algorithm: 1) initialize the particle locations, 2) evaluate the particles' fitness values to determine the local best and global best, 3) mutate the particles, and 4) update the particle positions using velocity. The steps of particle mutation are:

1. Evaluate the particles' fitness. The aim of this evaluation is to determine the best particle (minimum fitness) and the weak particle (maximum fitness).

2. Mutate the best particle and the worst particle to produce an average fitness. The mutation equation is determined as in (7):

   f_avg = (f_min + f_max) / 2   (7)

   where f_avg is the average fitness value, f_min is the minimum fitness value of the particles, and f_max is the maximum fitness value.

3. Evaluate the fitness value of each particle. In this step, particles whose fitness is smaller than the average fitness value (f_i < f_avg) remain close to the global best. Meanwhile, particles whose fitness is larger than the average fitness value (f_i > f_avg) must relocate from the current location to a new location. The particle location is relocated as in equation (8):

   x′_i = rand(r, c)   (8)

   where x′_i is the new random location of the particle in the search space (r, c). This step is important to give the particles an opportunity to explore the search space for a better solution.

IV. PARAMETER SETUP

In this experiment, we tested 150 datasets with 3 different ranges, 10 different particle sizes, and 5 different iteration sizes; therefore, each range has 50 datasets. The parameters r and c hold the locations of the particles in the ranges 30x30, 40x40, and 50x50. The particle sizes are N = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100}, and r1 and r2 are random values in [0, 1]. The acceleration coefficients c1 and c2 are set to 1.5. The inertia weight is set from w = 0.1 to w = 0.9. The processing of the algorithms terminates when the iteration count t reaches the iteration size, which is one of {30, 40, 50, 60, 100}. The proposed method is implemented in Matlab 2014 with a C compiler and executed on a Dell with an Intel(R) Core(TM) i7-2600 CPU @ 3.40 GHz.

V. RESULT & DISCUSSION

The performance of PSO, DEPSO, GbLN-PSO, and MEESPSO was evaluated in terms of convergence, optimum value, and consistency.

I. Convergence

Convergence occurs when the particles move closer and closer to some specific value. This condition causes the particle velocity to drop to zero and remain unchanged until the end of the iterations. Figure 2 shows an example of the convergence result (100 particles, 30 iterations, range 50x50) in this study.

Figure 2 Convergence Result

Based on the results, in the small range 30x30, PSO, DEPSO, GbLN-PSO, and MEESPSO all found the quality solution and converged at the earliest iterations. This shows that when the dataset is smaller, the particles are almost always able to explore and find the best solution in the search area.

Similarly, in the range 40x40, all four algorithms managed to find the quality solution. However, the numbers of iterations needed to converge on the quality solution were slightly different. Based on 30 iterations with 100 particles, GbLN-PSO and DEPSO used fewer iterations to converge on the quality solution, while PSO and MEESPSO required more iterations. Reaching a quality solution with a smaller number of iterations is the best way to reduce the computation time in an optimization problem.

Meanwhile, in the large range 50x50, only GbLN-PSO and MEESPSO converged on the quality solution. As we can see in Figure 3, both GbLN-PSO and MEESPSO converged at an early iteration, fewer than 14. This shows that GbLN-PSO and MEESPSO were better at finding the quality solution in a large dataset compared to DEPSO and PSO.

The overall results indicate that GbLN-PSO and MEESPSO were the best-performing algorithms in finding the quality solution across the three different ranges. Both algorithms could converge on the quality solution and avoid being trapped in a local minimum.

II. Optimum value

The optimal value is the minimum or maximum value of the objective function over the search space in an optimization problem [27]. In this study, the minimum value was selected as the optimum value, and the particles search for it over the iterations. The objective is to compare the performance of the four algorithms in finding the optimum value over the different ranges, or search spaces. Table 1 shows the comparison results for optimum value and error rate.

Table 1 Comparison result

Range | Algorithm | Optimum Value | Error Rate
30x30 | PSO       | 4  | 19
30x30 | DEPSO     | 20 | 3
30x30 | GbLN-PSO  | 15 | 38
30x30 | MEESPSO   | 39 | 39
40x40 | PSO       | 3  | 6
40x40 | DEPSO     | 3  | 16
40x40 | GbLN-PSO  | 11 | 8
40x40 | MEESPSO   | 27 | 26
50x50 | PSO       | 5  | 3
50x50 | DEPSO     | 11 | 9
50x50 | GbLN-PSO  | 23 | 16
50x50 | MEESPSO   | 41 | 41
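As a concrete illustration, the MEESPSO mutation step described in Section III (equations (7) and (8)) can be sketched as a standalone function. This is a minimal sketch rather than the authors' implementation: the sphere fitness and the 30x30 bounds are assumptions chosen to mirror the smallest search range in the experiments, and relocation draws uniform random coordinates as in equation (8).

```python
import random

random.seed(7)  # for reproducibility of this sketch

def sphere(p):
    # Assumed objective for illustration (minimization).
    return sum(v * v for v in p)

def meespso_mutation(positions, fitness, lo=0.0, hi=30.0):
    """Relocate weak particles: a fitness above the midrange of the
    swarm's best and worst fitness triggers a random restart."""
    f = [fitness(p) for p in positions]
    f_avg = (min(f) + max(f)) / 2.0                  # Eq. (7): midrange fitness
    result = []
    for p, fi in zip(positions, f):
        if fi > f_avg:
            # Weak particle: relocate uniformly at random, Eq. (8).
            result.append([random.uniform(lo, hi) for _ in p])
        else:
            # Strong particle: keep its current position.
            result.append(p[:])
    return result

swarm = [[1.0, 1.0], [5.0, 5.0], [25.0, 25.0]]
mutated = meespso_mutation(swarm, sphere)
# Fitness values are [2, 50, 1250], so the midrange is 626: the first
# two particles are kept and the worst one is relocated at random.
```

Keeping particles below the midrange close to the global best while restarting the rest is what gives the method its exploration-exploitation balance: good particles exploit, weak particles explore.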

Based on the results in Table 1, in the range 30x30, MEESPSO managed to find the most optimum values, followed by DEPSO, GbLN-PSO, and PSO. When the ranges were expanded to 40x40 and 50x50, every algorithm had difficulty finding the optimal value due to the larger search space. Even facing this difficulty, MEESPSO found the optimum values for 27 datasets, performing significantly better than GbLN-PSO, DEPSO, and PSO in the 50x50 range, while GbLN-PSO showed a slight increase in optimum values in the large search space compared to DEPSO and PSO.

Based on the overall results, PSO is less capable of finding the optimum value in a large search space and is easily trapped in local optima. Moreover, PSO requires many particles in order to find the optimum solution. Meanwhile, in this study, MEESPSO and GbLN-PSO provided the most optimum solutions by finding the optimum value in both small and large search spaces. This shows that MEESPSO and GbLN-PSO were able to increase the level of accuracy in finding the optimum solution.

III. Consistency

Consistency refers to the number of accurate search results produced across different optimization problems. In this study, 30 simulations with the same parameters were run to test the ability of the algorithms to find the optimum value.

Based on the results, GbLN-PSO is the most consistent algorithm in finding the optimum value across the three different ranges. Different ranges are one form of dynamic environment problem, involving a changing surrounding [28]. Through this study, it was found that GbLN-PSO was the most accurate in producing the optimum search result in a dynamic environment problem.

Besides this, MEESPSO was also consistent in finding the optimum value compared to DEPSO and PSO. According to the results, MEESPSO shows a slight increase in consistency in the small range 30x30, where its consistency was slightly better than that of GbLN-PSO. This shows that MEESPSO is more consistent and able to find the solution in small ranges.

IV. Error Rate

The error between the actual result and the optimum result was estimated in order to measure the distance between the actual result and the targeted result [29]. The error rate ℯ is calculated as in (9):

   ℯ = f_actual - f_optimum   (9)

where f_actual is the actual result produced by the algorithm and f_optimum is the optimum value in the search space.

From Table 1, we can see that MEESPSO shows a better error rate than GbLN-PSO, DEPSO, and PSO. This shows that the MEESPSO actual results are closer to the optimum value, causing the error rate to reach zero. MEESPSO was capable of producing a zero error rate for both small and large ranges. This shows that MEESPSO can increase the level of accuracy, because it manages to find the optimum value and produce a smaller error rate in the experiments.

GbLN-PSO and DEPSO also produced smaller error rates than PSO. In this study, the PSO algorithm produced the highest error rate compared to the other algorithms. PSO is unsuitable for solving problems in a large range in particular, as it is easily trapped in local optima; trapped particles are the main cause of its high error rates.

VI. CONCLUSION

Hybrid optimization techniques combine the best practices of the methods they enhance. The new enhancement method applied here hybridizes particle mutation into the conventional PSO process as a mechanism for exploration, while conventional PSO is used to exploit the particles and keep them from excessive exploration. This enhancement therefore aims to balance the exploration and exploitation processes. Moreover, the fitness is significantly improved because the enhancement method prevents the particles from being trapped in local minima, thus guiding them towards the global solution. MEESPSO also has self-adaptation properties, enhancing the performance of poor particles. Self-adaptation is important to prevent a particle from depending too much on the current best result, which can limit its ability to search for new local optima. The MEESPSO enhancement also prevents weak particles from being ignored or depending too much on the current best result; this new process helps the weak particles to explore the search space. In this work, MEESPSO, GbLN-PSO, and DEPSO performed better than conventional PSO in terms of number of particles and iterations, consistency, convergence, optimum value, and error rate. Furthermore, the results also show that MEESPSO has a high search tendency due to the ability of its particles to both explore and exploit the search area. This experiment was conducted on a specific function; in future work, the experiment will be applied to real-world applications to verify the efficiency of our enhanced algorithm.

ACKNOWLEDGMENT

The authors are thankful to the anonymous referees for their useful comments and suggestions that contributed to the improvement of this work. This work is supported by Universiti Malaysia Pahang under Postgraduate Research Grants Scheme PGRS190392.

REFERENCES

[1] A. Anand and L. Suganthi, "Hybrid GA-PSO optimization of Artificial Neural Network for forecasting electricity demand," Energies, 2018, doi: 10.3390/en11040728.
[2] F. Marini and B. Walczak, "Particle swarm optimization (PSO). A tutorial," Chemom. Intell. Lab. Syst., 2015, doi: 10.1016/j.chemolab.2015.08.020.
[3] H. Moayedi, M. Raftari, A. Sharifi, W. A. W. Jusoh, and A. S. A. Rashid, "Optimization of ANFIS with GA and PSO estimating α ratio in driven piles," Eng. Comput., 2020, doi: 10.1007/s00366-018-00694-w.
[4] N. Ghorbani, A. Kasaeian, A. Toopshekan, L. Bahrami, and A. Maghami, "Optimizing a hybrid wind-PV-battery system using GA-PSO and MOPSO for reducing cost and increasing reliability," Energy, 2018, doi: 10.1016/j.energy.2017.12.057.
[5] A. K. Mishra, S. R. Das, P. K. Ray, R. K. Mallick, A. Mohanty, and D. K. Mishra, "PSO-GWO Optimized Fractional Order PID Based Hybrid Shunt Active Power Filter for Power Quality Improvements," IEEE Access, 2020, doi: 10.1109/ACCESS.2020.2988611.

[6] L. T. Le, H. Nguyen, J. Zhou, J. Dou, and H. Moayedi, "Estimating the heating load of buildings for smart city planning using a novel artificial intelligence technique PSO-XGBoost," Appl. Sci., 2019, doi: 10.3390/APP9132714.
[7] F. Rezaei and H. R. Safavi, "GuASPSO: a new approach to hold a better exploration–exploitation balance in PSO algorithm," Soft Comput., 2020, doi: 10.1007/s00500-019-04240-8.
[8] X. Chen and Y. Li, "A modified PSO structure resulting in high exploration ability with convergence guaranteed," IEEE Trans. Syst. Man, Cybern. Part B Cybern., 2007, doi: 10.1109/TSMCB.2007.897922.
[9] T. Liu, L. Li, G. Shao, X. Wu, and M. Huang, "A novel policy gradient algorithm with PSO-based parameter exploration for continuous control," Eng. Appl. Artif. Intell., 2020, doi: 10.1016/j.engappai.2020.103525.
[10] Z. Musa, N. I. H. Fauzi, M. H. B. M. Hassin, M. N. M. Kahar, and J. Watada, "Global Best Local Neighborhood in Particle Swarm Optimization in Dynamic Environment," Adv. Sci. Lett., 2018, doi: 10.1166/asl.2018.12984.
[11] H. C. Wang and C. T. Yang, "Enhanced particle swarm optimization with self-adaptation on entropy-based inertia weight," IEICE Trans. Inf. Syst., 2016, doi: 10.1587/transinf.2015EDP7304.
[12] H. C. Wang and C. T. Yang, "Enhanced Particle Swarm Optimization With Self-Adaptation Based On Fitness-Weighted Acceleration Coefficients," Intell. Autom. Soft Comput., 2016, doi: 10.1080/10798587.2015.1057956.
[13] C. Li, S. Yang, and T. T. Nguyen, "A self-learning particle swarm optimizer for global optimization problems," IEEE Trans. Syst. Man, Cybern. Part B Cybern., 2012, doi: 10.1109/TSMCB.2011.2171946.
[14] L. Skanderova, T. Fabian, and I. Zelinka, "Self-adapting self-organizing migrating algorithm," Swarm Evol. Comput., 2019, doi: 10.1016/j.swevo.2019.100593.
[15] A. Sahu, S. K. Panigrahi, and S. Pattnaik, "Fast Convergence Particle Swarm Optimization for Functions Optimization," Procedia Technol., vol. 4, pp. 319–324, 2012, doi: 10.1016/j.protcy.2012.05.048.
[23] …, Adv. Robot. Syst., 2017, doi: 10.1177/1729881417710312.
[24] R. Misra and K. S. Ray, "Object Tracking based on Quantum Particle Swarm Optimization," 2017 9th Int. Conf. Adv. Pattern Recognition, ICAPR 2017, pp. 292–297, 2018, doi: 10.1109/ICAPR.2017.8593075.
[25] S. Qian, H. Wu, and G. Xu, "An improved particle swarm optimization with clone selection principle for dynamic economic emission dispatch," Soft Comput., vol. 24, no. 20, pp. 15249–15271, 2020, doi: 10.1007/s00500-020-04861-4.
[26] M. Gajula, "Object Tracking using Orthogonal Learning Particle Swarm Optimization (OLPSO)," vol. 13, no. 1, pp. 252–261, 2018.
[27] P. P. Dash and D. Patra, "Mutation based self regulating and self perception particle swarm optimization for efficient object tracking in a video," Meas. J. Int. Meas. Confed., vol. 144, pp. 311–327, 2019, doi: 10.1016/j.measurement.2019.05.030.
[28] B. Luitel and G. K. Venayagamoorthy, "Differential Evolution Particle Swarm Optimization for Digital Filter Design," vol. 2, no. 3, pp. 3954–3961, 2008.
[29] Z. Musa and J. Watada, "Video Tracking System: A Survey," ICIC Express Letters, 2008.
[30] Z. Musa, M. Z. Salleh, R. A. Bakar, and J. Watada, "GbLN-PSO and model-based particle filter approach for tracking human movements in large view cases," IEEE Trans. Circuits Syst. Video Technol., vol. 26, no. 8, pp. 1433–1446, 2016, doi: 10.1109/TCSVT.2015.2433172.
[31] M. Shahkhir Mozamir, R. Binti Abu Bakar, W. Isni Soffiah Wan Din, and Z. Musa, "GbLN-PSO Algorithm for Indoor Localization in Wireless Sensor Network," in IOP Conf. Ser.: Mater. Sci. Eng., 2020, doi: 10.1088/1757-899X/769/1/012033.
[32] H. Garg, "A hybrid PSO-GA algorithm for constrained optimization problems," Appl. Math. Comput., vol. 274, pp. 292–305, 2016, doi: 10.1016/j.amc.2015.11.001.
[33] L. Hongru, H. Jinxing, and J. Shouyong, "A Hybrid PSO Based on Dynamic Clustering for Global Optimization," 2018, doi: 10.1016/j.ifacol.2018.09.311.
[16] İ. B. Aydilek, “A hybrid firefly and particle swarm optimization [34] K. Chaitanya, D. V. L. N. Somayajulu, and P. R. Krishna,
algorithm for computationally expensive numerical problems,” “Memory-based approaches for eliminating premature
Appl. Soft Comput. J., 2018, doi: 10.1016/j.asoc.2018.02.025. convergence in particle swarm optimization,” Appl. Intell., 2021,
[17] Q. Xu, “Modified particle swarm optimization algorithm and its doi: 10.1007/s10489-020-02045-z.
application in neural network,” in Journal of Physics: [35] B. Nakisa, M. Z. A. Nazri, M. N. Rastgoo, and S. Abdullah, “A
Conference Series, 2020, doi: 10.1088/1742- survey: Particle swarm optimization based algorithms to solve
6596/1682/1/012015. premature convergence problem,” J. Comput. Sci., 2014, doi:
[18] T. Kumaresan, P. Subramanian, D. S. Alex, M. I. T. Hussan, and 10.3844/jcssp.2014.1758.1765.
B. Stalin, “Email image spam detection using fast support vector [36] G. Xu, Z. H. Wu, and M. Z. Jiang, “Premature convergence of
machine and fast convergence particle swarm optimization,” Int. standard particle swarm optimisation algorithm based on Markov
J. Recent Technol. Eng., 2019. chain analysis,” Int. J. Wirel. Mob. Comput., vol. 9, no. 4, pp.
[19] S. Chatterjee, D. Goswami, S. Mukherjee, and S. Das, 377–382, 2015, doi: 10.1504/IJWMC.2015.074034.
“Behavioral analysis of the leader particle during stagnation in a [37] R. B. Larsen, J. Jouffroy, and B. Lassen, “On the premature
particle swarm optimization algorithm,” Inf. Sci. (Ny)., 2014, convergence of particle swarm optimization,” in 2016 European
doi: 10.1016/j.ins.2014.03.098. Control Conference, ECC 2016, 2017, doi:
[20] B. Tang, K. Xiang, M. Pang, and Z. Zhanxia, “Multi-robot path 10.1109/ECC.2016.7810572.
planning using an improved self-adaptive particle swarm [38] A. Alshahrani, N. M. Namazi, M. Abdouli, and M. A. Alqarni,
optimization,” Int. J. Adv. Robot. Syst., 2020, doi: “Escaping the local optima trap caused by PSO by hybridization
10.1177/1729881420936154. scheme for elongate the WSN’s lifetime,” in 2017 8th IEEE
[21] X. Qi, G. Ju, and S. Xu, “Efficient solution to the stagnation Annual Information Technology, Electronics and Mobile
problem of the particle swarm optimization algorithm for phase Communication Conference, IEMCON 2017, 2017, doi:
diversity,” Appl. Opt., vol. 57, no. 11, p. 2747, 2018, doi: 10.1109/IEMCON.2017.8117166.
10.1364/ao.57.002747. [39] W. Liu, Z. Wang, N. Zeng, Y. Yuan, F. E. Alsaadi, and X. Liu,
[22] M. R. Bonyadi and Z. Michalewicz, “Stability analysis of the “A novel randomised particle swarm optimizer,” Int. J. Mach.
particle swarm optimization without stagnation assumption,” Learn. Cybern., 2021, doi: 10.1007/s13042-020-01186-4.
IEEE Trans. Evol. Comput., 2016, doi: [40] S. Mostafa Bozorgi and S. Yazdani, “IWOA: An improved
10.1109/TEVC.2015.2508101. whale optimization algorithm for optimization problems,” J.
[23] Z. Zhu, B. Tang, and J. Yuan, “Multirobot task allocation based Comput. Des. Eng., 2019, doi: 10.1016/j.jcde.2019.02.002.
on an improved particle swarm optimization approach,” Int. J.