
Two-Level of Nondominated Solutions Approach to Multiobjective Particle Swarm Optimization


M. A. Abido
Electrical Engineering Department
King Fahd University of Petroleum and Minerals
Dhahran 31261, Saudi Arabia
[email protected]

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
GECCO'07, July 7–11, 2007, London, England, United Kingdom.
Copyright 2007 ACM 978-1-59593-697-4/07/0007…$5.00.

ABSTRACT
In multiobjective particle swarm optimization (MOPSO) methods, selecting the local best and the global best for each particle of the population has a great impact on the convergence and diversity of solutions, especially when optimizing problems with a high number of objectives. This paper presents a two-level nondominated solutions approach to MOPSO. The ability of the proposed approach to detect the true Pareto optimal solutions and capture the shape of the Pareto front is evaluated through experiments on well-known non-trivial test problems. The diversity of the nondominated solutions obtained is demonstrated through different measures. The proposed approach has been assessed through a comparative study with results reported in the literature.

Categories and Subject Descriptors
I.2.8 [Artificial Intelligence]: Problem Solving, Control Methods, and Search – heuristic methods.

General Terms
Algorithms, Experimentation, Performance, Verification.

Keywords
Particle Swarm Optimization, multiobjective optimization, Pareto-optimal set, nondominated solutions.

1. INTRODUCTION
Evolutionary Algorithms (EA) are good candidates for multiobjective optimization problems due to their ability to search simultaneously for multiple Pareto optimal solutions and to perform a better global search of the search space [1]. Particle Swarm Optimization (PSO) is a swarm intelligence method that models social behavior to guide swarms of particles towards the most promising regions of the search space [2-5]. Generally, PSO is characterized as simple in concept, easy to implement, and computationally efficient. Unlike other heuristic techniques, PSO has a flexible and well-balanced mechanism to enhance and adapt its global and local exploration abilities. It usually results in faster convergence rates than Genetic Algorithms [6]. PSO has been applied to different single-objective optimization problems with impressive success [7-11]. Although the performance of PSO in single-objective optimization tasks has been extensively studied, little work has been done on multiobjective optimization problems thus far.

Recently, investigators have been paying more and more attention to PSO as a means of solving multi-objective problems, as discussed in the following section. Changing a PSO into a multi-objective PSO (MOPSO) requires redefinition of the global and local best individuals in order to obtain a front of optimal solutions. In multiobjective particle swarm optimization there is no absolute global best, but rather a set of nondominated solutions. In addition, there may be no single local best individual for each particle of the swarm. Choosing the global best and local best to guide the swarm particles becomes a nontrivial task in the multiobjective domain. Some attempts have been made in the literature to select the best guides for the particles in MOPSO.

Parsopoulos and Vrahatis [12] presented a first study of the performance of the PSO in multiobjective optimization problems. The performance of the PSO in terms of finding the Pareto front in weighted aggregation cases was presented. A vector evaluated PSO (VEPSO), based on the concept of the vector evaluated genetic algorithm (VEGA) [13], was proposed and examined to perform multiobjective optimization. However, selecting individuals that excel in one objective without regard to the other objectives implies the problem of killing off middling-performance individuals that can be very useful for compromise solutions [14].

Hu and Eberhart [15] presented a MOPSO that uses a dynamic neighborhood strategy to obtain the best local guide for each particle in biobjective optimization problems. However, selecting the best local guides based on one of the objectives degrades the algorithm performance, as one-dimensional optimization is used to deal with multiple objectives. In addition, selecting the fixed objective needs a priori knowledge about the objective functions. Furthermore, choosing the number of neighbors considered, the objective function to optimize and the one to be fixed, and extending the algorithm to higher dimensional objective spaces are very much involved questions.

Coello and Lechuga [16] proposed a MOPSO where the objective space is divided into hypercubes before selecting the best local guide for each particle in the population. However, this method biases the selection toward under-represented areas of the

estimated Pareto front, and only one local best solution is maintained for each particle. In addition, the random selection of the best local guide affects its quality. Nevertheless, the proposed method was compared with the Pareto Archived Evolution Strategy (PAES) and the Nondominated Sorting Genetic Algorithm II (NSGA-II) with promising results.

Fieldsend and Singh [17] proposed a MOPSO which uses a dominated tree for the choice of the best local guide for each particle in the population. The presented method has been tested on four biobjective test problems, with promising results on three test problems compared with PAES and the competing MOPSO presented in [16]. However, the performance of the MOPSO presented in [16] and in [17] was very poor on the multifrontal fourth test problem with multimodality [18]. In addition, the algorithm has not been validated on higher dimensional objective spaces and might lose the most suitable solutions.

Mostaghim and Teich [19] proposed a sigma method in which the best local guides for each particle are adopted to improve the convergence and diversity of a PSO approach used for multiobjective optimization. They also use a "turbulence" operator, but applied in the decision variable space. The use of the sigma values increases the selection pressure of PSO, which was already high. This may cause premature convergence in some cases, e.g., in multifrontal problems. Comparisons were made with the strength Pareto evolutionary algorithm 2 (SPEA2) [20] and the dominated trees of [17] using four test problems and the coverage metric [21].

Hu et al. [22] adopted a secondary population called extended memory and introduced some further improvements to their dynamic neighborhood PSO approach presented in [15]. Nevertheless, it is worth indicating that this approach completely fails in generating the true Pareto front of some problems. In addition, the presented algorithm has been compared to the SPEA [23] using the set coverage metric [21].

Li [24] proposed an approach in which the main mechanisms of the NSGA-II [25] are adopted in a PSO algorithm. The proposed approach showed a very competitive performance with respect to the NSGA-II, even outperforming it in some cases.

Lu [26] presented the dynamic population strategy assisted PSO. This method can evolve to an approximately optimal population size while the population is approaching the true Pareto front. However, this algorithm suffers from some difficulties in finding a well-approximated Pareto optimal front [27].

Mostaghim and Teich [28] proposed a new method that uses the property of moving particles in MOPSO and divides the population of the covering MOPSO into subswarms. The subswarms try to cover the gaps between the nondominated solutions found in the initial run. The proposed covering method is tested on different test problems.

In the most recent work in the field of MOPSO, different approaches have been introduced to identify the local best solution based on memorizing all the nondominated solutions visited by a particle and selecting the local best among them [29]. These approaches have been implemented on some standard test problems, where the results show that keeping the particle archive significantly improves the effectiveness of the technique. However, keeping all the nondominated solutions visited by a particle has the drawback of drastically increasing the computational burden. On the other hand, a comprehensive survey of the state-of-the-art in multiobjective particle swarm optimizers can be found in [30], where the different techniques reported in MOPSO development have been categorized and discussed.

In this paper, a new MOPSO technique based on two levels of nondominated solutions is proposed. The ability of the proposed approach to detect the true Pareto optimal solutions and capture the shape of the Pareto front is studied through experiments on well-known non-trivial test problems.

2. MULTIOBJECTIVE OPTIMIZATION
Many real-world problems involve simultaneous optimization of several objective functions. Generally, these functions are non-commensurable and often conflicting objectives. Multiobjective optimization with such conflicting objective functions gives rise to a set of optimal solutions, instead of one optimal solution. The reason for the optimality of many solutions is that no one of them can be considered better than any other with respect to all objective functions. These optimal solutions are known as Pareto-optimal solutions.

A general multiobjective optimization problem consists of a number of objectives to be optimized simultaneously and is associated with a number of equality and inequality constraints. It can be formulated as follows:

Minimize fi(x), i = 1, …, Nobj    (1)

Subject to: gj(x) = 0, j = 1, …, M,
            hk(x) ≤ 0, k = 1, …, K    (2)

where fi is the ith objective function, x is a decision vector that represents a solution, and Nobj is the number of objectives.

For a multiobjective optimization problem, any two solutions x1 and x2 can have one of two possibilities: one dominates the other, or neither dominates the other. In a minimization problem, without loss of generality, a solution x1 dominates x2 iff the following two conditions are satisfied:

∀i ∈ {1, 2, …, Nobj}: fi(x1) ≤ fi(x2),    (3)
∃j ∈ {1, 2, …, Nobj}: fj(x1) < fj(x2).    (4)

If either of the above conditions is violated, the solution x1 does not dominate the solution x2. If x1 dominates the solution x2, x1 is called the nondominated solution within the set {x1, x2}. The solutions that are nondominated within the entire search space are denoted as Pareto-optimal and constitute the Pareto-optimal set or Pareto-optimal front.

3. PROPOSED APPROACH TO MOPSO

3.1 Overview
In multiobjective particle swarm optimization, a set of nondominated solutions must replace the single global best individual of the standard single-objective PSO case.
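The dominance conditions of Eqs. (3) and (4) are the core test used throughout the proposed approach. The paper's implementation was written in FORTRAN (Section 3.3); purely as an illustrative Python sketch, the dominance test and the extraction of a nondominated set can be written as:

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (minimization):
    no worse in every objective (Eq. 3) and strictly better
    in at least one objective (Eq. 4)."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def nondominated(front):
    """Keep only the vectors not dominated by any other member."""
    return [f for f in front
            if not any(dominates(g, f) for g in front if g != f)]
```

For example, in the set {(1, 5), (2, 2), (3, 4), (4, 1)} the vector (3, 4) is dominated by (2, 2), so only the other three vectors are nondominated.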
In addition, there may be no single local best individual for each particle of the swarm. Choosing the global best and local best to guide the swarm particles becomes a nontrivial task in the multiobjective domain. This paper presents a two-level nondominated solutions approach to address these problems. In the proposed approach, elitism is also considered by copying any nondominated solution obtained to an external set, in order to keep the new nondominated solutions obtained during the generations. The external set is updated regularly to hold only nondominated solutions. The basic definitions and the major steps of the proposed approach can be explained as follows.

3.2 Proposed MOPSO Algorithm
The major elements of the proposed MOPSO technique are briefly defined as follows:

Nondominated local set, Sj*(t): a set that stores the nondominated solutions obtained by the jth particle up to the current time. As the jth particle moves through the search space, its new position is added to this set and the set is updated to keep only the nondominated solutions. An average linkage based hierarchical clustering algorithm [31], used by SPEA [32], is employed to reduce the nondominated local set size if it exceeds a certain prespecified value.

Nondominated global set, S**(t): a set that stores the nondominated solutions obtained by all particles up to the current time. First, the union of all nondominated local sets is formed. Then, the nondominated solutions out of this union become the members of the nondominated global set. An average linkage based hierarchical clustering algorithm is employed to reduce the nondominated global set to a manageable size.

Local best, Xj*(t), and Global best, Xj**(t): the individual distances between members of the nondominated local set of the jth particle, Sj*(t), and members of the nondominated global set, S**(t), are measured in the objective space. If Xj*(t) and Xj**(t) are the members of Sj*(t) and S**(t), respectively, that give the minimum distance, they are selected as the local best and the global best of the jth particle, respectively.

In the proposed MOPSO algorithm, the population has n particles and each particle is an m-dimensional vector, where m is the number of optimized parameters. The computational flow of the proposed MOPSO technique can be described in the following steps.

Step 1 (Initialization): Set the time counter t = 0 and randomly generate n particles, {Xj(0), j = 1, …, n}, where Xj(0) = [xj,1(0), …, xj,m(0)]. xj,k(0) is generated by randomly selecting a value with uniform probability over the kth optimized parameter search space [xkmin, xkmax]. Similarly, randomly generate the initial velocities of all particles, {Vj(0), j = 1, …, n}, where Vj(0) = [vj,1(0), …, vj,m(0)]. vj,k(0) is generated by randomly selecting a value with uniform probability over the kth dimension [−vkmax, vkmax], where the particle velocity in the kth dimension is limited by some maximum value, vkmax. This limit enhances the local exploration of the problem space and realistically simulates the incremental changes of human learning. To ensure uniform velocity through all dimensions, the maximum velocity in the kth dimension is proposed as:

vkmax = (xkmax − xkmin)/N    (5)

where N is a selected number of intervals.

Each particle in the initial population is evaluated using the objective functions. For each particle, set Sj*(0) = {Xj(0)} and the local best Xj*(0) = Xj(0), j = 1, …, n. Search for the nondominated solutions and form the nondominated global set S**(0). The nearest member of S**(0) to Xj*(0) is selected as the global best Xj**(0) of the jth particle. Set the external set equal to S**(0). Set the initial value of the inertia weight w(0).

Step 2 (Time updating): Update the time counter t = t + 1.

Step 3 (Weight updating): Update the inertia weight.

Step 4 (Velocity updating): Using the local best Xj*(t) and the global best Xj**(t) of each particle, j = 1, …, n, the jth particle velocity in the kth dimension is updated according to the following equation:

vj,k(t) = w(t) vj,k(t−1) + c1 r1 (x*j,k(t−1) − xj,k(t−1)) + c2 r2 (x**j,k(t−1) − xj,k(t−1))    (6)

where c1 and c2 are positive constants and r1 and r2 are uniformly distributed random numbers in [0,1]. If a particle violates the velocity limits, set its velocity equal to the proper limit.

Step 5 (Position updating): Based on the updated velocities, each particle changes its position according to the following equation:

xj,k(t) = vj,k(t) + xj,k(t−1)    (7)

If a particle violates its position limits in any dimension, set its position at the proper limit.

Step 6 (Nondominated local set updating): The updated position of the jth particle is added to Sj*(t). The dominated solutions in Sj*(t) are truncated and the set is updated accordingly. If the size of Sj*(t) exceeds a prespecified value, the hierarchical clustering algorithm is invoked to reduce the size to its maximum limit.

Step 7 (Nondominated global set updating): The union of all nondominated local sets is formed and the nondominated solutions out of this union become the members of the nondominated global set S**(t). The size of this set is reduced by the hierarchical clustering algorithm if it exceeds a prespecified value.

Step 8 (External set updating): The external Pareto-optimal set is updated as follows. Copy the members of S**(t) to the external Pareto set. Search the external Pareto set for the nondominated individuals and remove all dominated solutions from the set. If the number of the individuals externally stored in the Pareto set exceeds the maximum size, reduce the set by means of clustering.
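Steps 4 and 5 above (Eqs. (6) and (7)), together with the velocity limit of Eq. (5) and the clamping rules, can be sketched as follows. This is an illustrative Python fragment, not the paper's FORTRAN code; c1 = c2 = 2 follows Section 3.3, while the position bounds and the number of intervals N used below are placeholder assumptions:

```python
import random

def update_particle(x, v, x_local, x_global, w,
                    c1=2.0, c2=2.0, x_min=-20.0, x_max=20.0, n_intervals=10):
    """One velocity and position update for a single particle."""
    v_max = (x_max - x_min) / n_intervals              # Eq. (5)
    new_x, new_v = [], []
    for k in range(len(x)):
        r1, r2 = random.random(), random.random()
        vk = (w * v[k]
              + c1 * r1 * (x_local[k] - x[k])          # pull toward local best
              + c2 * r2 * (x_global[k] - x[k]))        # pull toward global best, Eq. (6)
        vk = max(-v_max, min(v_max, vk))               # clamp to velocity limits
        xk = x[k] + vk                                 # Eq. (7)
        xk = max(x_min, min(x_max, xk))                # clamp to position limits
        new_v.append(vk)
        new_x.append(xk)
    return new_x, new_v
```

In the two-level scheme, x_local and x_global would be the members of Sj*(t) and S**(t) selected in Step 9; here they are passed in as plain vectors.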
Step 9 (Local best and global best updating): The individual distances between members of Sj*(t) and members of S**(t) are measured in the objective space. If Xj*(t) and Xj**(t) are the members of Sj*(t) and S**(t), respectively, that give the minimum distance, they are selected as the local best and the global best of the jth particle, respectively.

Step 10 (Stopping criteria): If the number of iterations exceeds the maximum, stop; otherwise go to Step 2.

3.3 The Proposed MOPSO Implementation
The proposed MOPSO based approach was implemented in the FORTRAN language and the developed software program was executed on a 1.8-GHz Pentium 4 PC. Initially, several runs were done with different values of the PSO key parameters, such as the initial inertia weight and the maximum allowable velocity. The other parameters are selected as: number of particles n = 50, decrement constant α = 0.99, c1 = c2 = 2, and the search is terminated if the number of iterations reaches 1000.

To demonstrate the effectiveness of the proposed approach, different cases with various objectives are considered in this study.

4. RESULTS AND DISCUSSIONS
To compare the results and to assess the effectiveness of the proposed approach to MOPSO, a group of benchmark test problems used to compare multiobjective techniques has been examined. The test for each function has been done over 100 runs. For the first two test problems, TP1 and TP2, the SPEA [32] implemented in [33-34] has been employed for comparison purposes. The results of SPEA for the last two test problems, TP3 and TP4, have been downloaded from [35].

4.1 Test Problem TP1
This test problem can be defined as follows [34].

f1(x) = −x if x ≤ 1; −2 + x if 1 < x ≤ 3; 4 − x if 3 < x ≤ 4; −4 + x if x > 4    (8)
f2(x) = (x − 5)²    (9)
−100 ≤ x ≤ 100    (10)

This test problem has a discontinuous Pareto front. The Pareto fronts of the proposed MOPSO and SPEA are shown in Fig. 1.

4.2 Test Problem TP2
This test problem can be defined as follows.

f1(x1, x2) = 2 + (x1 − 2)² + (x2 − 1)² − 10 c1    (11)
f2(x1, x2) = 9x1 − (x2 − 1)² − 10 c2    (12)
c1 = 225 − x1² − x2²    (13)
c2 = 3x2 − x1 − 10    (14)
c1 = { c1 if c1 ≤ 0; 0 if c1 > 0 }    (15)
c2 = { c2 if c2 ≤ 0; 0 if c2 > 0 }    (16)
−20 ≤ x1, x2 ≤ 20    (17)

The Pareto fronts of the proposed MOPSO and SPEA are shown in Fig. 2.

4.3 Test Problem TP3
This test problem can be defined as follows [18].

f1(x1) = x1    (18)
f2(x) = g(x2, …, xn)·h(f1, g)    (19)
g(x2, …, xn) = 1 + 9(∑i=2..n xi)/(n − 1)    (20)
h(f1, g) = 1 − √(f1/g)    (21)
x = (x1, x2, …, xn)    (22)
xi ∈ [0,1], i = 1, 2, …, 30    (23)

This test problem has a convex Pareto optimal front formed with

g(x2, …, xn) = 1    (24)

The Pareto fronts of the proposed MOPSO and SPEA are shown in Fig. 3.

4.4 Test Problem TP4
This test problem can be defined as follows [18].

f1(x1) = x1    (25)

f2(x) = g(x2, …, xn)·h(f1, g)    (26)
g(x2, …, xn) = 1 + 9(∑i=2..n xi)/(n − 1)    (27)
h(f1, g) = 1 − √(f1/g) − (f1/g)·sin(10π f1)    (28)
x = (x1, x2, …, xn)    (29)
xi ∈ [0,1], i = 1, 2, …, 30    (30)

This test problem represents the discreteness feature. Its Pareto optimal front consists of several non-contiguous convex parts formed with

g(x2, …, xn) = 1    (31)

The Pareto fronts of the proposed MOPSO and SPEA are shown in Fig. 4.

[Figure 1: Pareto front of test problem 1 produced by (a) Proposed MOPSO, (b) SPEA]

[Figure 2: Pareto front of test problem 2 produced by (a) Proposed MOPSO, (b) SPEA]
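TP3 and TP4 are taken from [18] and match the well-known ZDT1 and ZDT3 benchmarks. Assuming that reading, Eqs. (18)-(31) evaluate as follows (an illustrative Python sketch, not the paper's FORTRAN implementation):

```python
import math

def tp3(x):
    """TP3 (Eqs. 18-24): convex Pareto optimal front, reached at g = 1."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)       # Eq. (20)
    h = 1.0 - math.sqrt(f1 / g)                     # Eq. (21)
    return f1, g * h                                # f2, Eq. (19)

def tp4(x):
    """TP4 (Eqs. 25-31): discontinuous Pareto optimal front."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)       # Eq. (27)
    h = (1.0 - math.sqrt(f1 / g)
         - (f1 / g) * math.sin(10.0 * math.pi * f1))  # Eq. (28)
    return f1, g * h                                # f2, Eq. (26)
```

On the front itself (xi = 0 for i ≥ 2, so g = 1), tp3 reduces to f2 = 1 − √f1, the convex curve plotted in Fig. 3.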
[Figure 3: Pareto front of test problem 3 — Proposed MOPSO and SPEA [35] against the true Pareto front]

[Figure 4: Pareto front of test problem 4 — Proposed MOPSO and SPEA [35] against the true Pareto front]

It can be seen from Figs. 1, 2, 3, and 4 that the proposed MOPSO has satisfactory diversity and distribution characteristics for these test problems. The superiority of the proposed MOPSO over SPEA is most pronounced in test problems TP3 and TP4, as the proposed MOPSO captures the true Pareto optimal front in these test problems with satisfactory diversity and distribution.

5. A COMPARATIVE STUDY
In this section, a comparative study of quality measures has been carried out to assess the proposed MOPSO. Generally, the definition of quality in the case of multiobjective optimization is more complex than for single objective optimization problems, since the optimization goal itself consists of multiple objectives [18, 21, 23]. Therefore, the above results have been compiled and compared in view of the following objectives.

• The obtained nondominated set should be as close as possible to the Pareto-optimal front.
• The obtained nondominated solutions should be as diverse as possible.
• The obtained nondominated solutions should have a good distribution over the nondominated front.

A performance measure of the extent of the nondominated solutions is presented in [18]. The measure estimates the range to which the fronts spread out; in other words, it measures the normalized distance between the two outer nondominated solutions. The average values of the normalized distance measure over 100 different optimization runs are given in Table 1. The results show that the proposed MOPSO has a larger extent of the nondominated solutions in all test problems considered except TP1, where SPEA has a similar result with respect to this measure.

On the other hand, the set coverage metric [23] for comparing the performance of the proposed MOPSO and SPEA has been examined in this study. The average values of this measure over 100 different optimization runs are given in Table 2. It can be seen that the performance of the proposed MOPSO is comparable with that of SPEA in TP1, as only 1% of the nondominated solutions of each technique is covered by those of the other. It can also be seen from Table 2 that the proposed MOPSO has better performance in TP2. The better performance of the proposed MOPSO is more pronounced in the case of test problems TP3 and TP4, where the nondominated solutions obtained by the proposed MOPSO completely cover those of SPEA and none of the SPEA solutions covers any solution of the proposed MOPSO. This confirms the fronts shown in Figs. 3 and 4.

In this study, the quality measure presented in [34] is also implemented. For each test problem, the individual nondominated sets of the proposed MOPSO and SPEA are combined to form a pool. Then, the dominance conditions are applied to all solutions in the pool. The nondominated solutions are extracted from the pool to form an elite set of Pareto-optimal solutions obtained by both techniques. The average over 100 different runs for each test problem is given in Table 3. It can be observed that the proposed MOPSO has a 100% share of the elite set while SPEA has no contribution in test problems TP3 and TP4. The better performance of MOPSO is also observed in the case of TP2, while a comparable performance has been experienced in TP1.

Table 1. Normalized distance measure

Test Problem      TP1     TP2     TP3     TP4
Proposed MOPSO    1.412   1.409   0.965   0.976
SPEA              1.402   1.400   0.961   0.987
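The set coverage metric C(A, B) of [21, 23], reported as percentages in Table 2, counts how much of set B is covered by set A. As an illustrative Python sketch, reusing the dominance conditions of Eqs. (3)-(4) and treating equal vectors as covered:

```python
def dominates(a, b):
    """a dominates b in the minimization sense of Eqs. (3)-(4)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def set_coverage(A, B):
    """C(A, B): percentage of solutions in B that are dominated by,
    or equal to, at least one solution in A."""
    covered = sum(1 for b in B if any(dominates(a, b) or a == b for a in A))
    return 100.0 * covered / len(B)
```

C(A, B) = 100 means every solution of B is covered by A, and C(A, B) = 0 means none is, which is how the TP3 and TP4 columns of Table 2 should be read.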
Table 2. Percentage of nondominated solutions of set B Conference on Evolutionary Programming, March 1998, pp.
covered by those in set A 591-600.
Set A Set B TP1 TP2 TP3 TP4 [5] E. Ozcan and C. Mohan, "Analysis of a Simple Particle
Proposed Swarm Optimization System," Intelligent Engineering
SPEA 1.0 12.0 100.0 100.0 Systems Through Artificial Neural Networks, Vol. 8, 1998,
MOPSO
Proposed pp. 253-258.
SPEA 1.0 6.0 0.0 0.0
MOPSO [6] J. Kennedy and R. Eberhart, Swarm Intelligence, Morgan
Kaufmann Publishers, 2001.
[7] M. A. Abido, “Optimal Design of Power System Stabilizers
Using Particle Swarm Optimization,” IEEE Trans. on Energy
Table 3. Average number of Pareto-optimal solutions in elite Conversion, Vol. 17, No. 3, September 2002, pp. 406-413.
set of nondominated solutions
[8] M. A. Abido, “Optimal Power Flow Using Particle Swarm
Test Problem TP1 TP2 TP3 TP4 Optimization” International Journal of Electrical Power and
Proposed MOPSO 99 94 100 100 Energy Systems, Vol. 24, No. 7, October 2002, pp. 563-571.
SPEA 99 88 0 0 [9] M. P. Wachowiak, R. Smolíková, Y. Zheng, J. M. Zurada,
Elite Set Size 198 182 100 100 and A. S. Elmaghraby, “An Approach to Multimodal
Biomedical Image Registration Utilizing Particle Swarm
Optimization,” IEEE Transactions on Evolutionary
6. ACKNOWLEDGMENT Computation, Vol. 8, No. 3, June 2004, pp. 289-301.
The author acknowledges the support and encouragement of King [10] M. F. Tasgetiren, M. Sevkli, Y. C. Liang, and G.
Fahd University of Petroleum & Minerals through Funded Project Gencyilmaz, “Particle Swarm Optimization Algorithm for
# FT/2006-13. Permutation Flowshop Sequencing Problem,” Proceedings of
the 4th International Workshop on Ant Colony Optimization
and Swarm Intelligence, ANTS2004, LNCS 3172 by
Springer-Verlag, Brussels, Belgium, September 5-8, 2004,
7. CONCULSIONS pp.382-390.
A new approach to multiobjective particle swarm optimization
technique is presented in this paper. The proposed approach is [11] S. Mishra, “A Hybrid Least Square-Fuzzy Bacterial Foraging
based on two-level, local and global, of nondominated solutions Strategy for Harmonic Estimation,” IEEE Transactions on
to select the local and global guides for each particle in the Evolutionary Computation, Vol. 9, No. 1, February 2005, pp.
swarm. The capability of the proposed MOPSO to obtain the true 61-73.
Pareto optimal solutions and capture the shape of the Pareto front [12] K. E. Parsopoulos and M. N. Vrahatis, “Particle Swarm
is evaluated and tested on well-known non-trivial test problems. Optimization Method in Multiobjective Problems,”
The diversity of the nondominated solutions obtained by the Proceedings of the ACM 2002 Symposium on Applied
proposed MOPSO is demonstrated through different measures. Computing (SAC’2002), 2002, pp. 603-607.
The proposed approach has been assessed through a comparative
study with the reported results in the literature. The results show [13] J. D. Schaffer, Multiobjective Optimization with Vector
the superiority of the proposed MOPSO approach in terms of Evaluated Genetic Algorithms, PhD Thesis, Vanderbilt
capturing the shape of the Pareto front and obtaining University, Nashville, USA, 1984.
nondominated solutions with satisfactory diversity characteristics [14] C. A. C. Coello, “A Comprehensive Survey of Evolutionary-
for the test problems considered. Based Multiobjective Optimization Techniques,” Knowledge
and Information Systems, Vol. 1, No. 3, 1999, pp. 269-308.
[15] X. Hu and R. Eberhart, “Multiobjective Optimization Using
Dynamic Neighborhood Particle Swarm Optimization,”
8. REFERENCES Congress on Evolutionary Computation (CEC'2002), Vol. 2,
[1] E. Zitzler, Evolutionary Algorithms for Multiobjective pp. 1677-1681, IEEE Service Center, Piscataway, New
Optimization: Methods and Applications, Ph.D. Thesis, Jersey, May 2002.
Swiss Federal Institute of Technology, Zurich, 1999.
[16] C. A. C. Coello and M. S. Lechuga, “MOPSO: A Proposal
[2] J. Kennedy, "The Particle Swarm: Social Adaptation of for Multiple Objective Particle Swarm Optimization,”
Knowledge," Proceedings of the 1997 IEEE international Congress on Evolutionary Computation (CEC'2002), Vol. 2,
Conference on Evolutionary Computation ICEC'97, pp. 1051-1056, IEEE Service Center, Piscataway, New
Indianapolis, Indiana, USA, 1997, pp. 303-308. Jersey, May 2002.
[3] P. Angeline, "Evolutionary Optimization versus Particle [17] J. E. Fieldsend and S. Singh, “A Multi-Objective Algorithm
Swarm Optimization: Philosophy and Performance based upon Particle Swarm Optimization, an Efficient Data
Differences, " Proceedings of the 7th Annual Conference on Structure and Turbulence,” Proceedings of the 2002 U.K.
Evolutionary Programming, March 1998, pp. 601-610. Workshop on Computational Intelligence, pp. 37-44,
[4] Y. Shi and R. Eberhart, "Parameter Selection in Particle Birmingham, UK, 2-4 September 2002.
Swarm Optimization," Proceedings of the 7th Annual

732
[18] E. Zitzler, K. Deb, and L. Thiele, "Comparison of Multiobjective Evolutionary Algorithms: Empirical Results," Evolutionary Computation, Vol. 8, No. 2, 2000, pp. 173-195.
[19] S. Mostaghim and J. Teich, "Strategies for Finding Good Local Guides in Multiobjective Particle Swarm Optimization (MOPSO)," Proceedings of 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, April 2003, pp. 26-33.
[20] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the Strength Pareto Evolutionary Algorithm," Proceedings of EUROGEN 2001, Athens, Greece, September 2001.
[21] E. Zitzler and L. Thiele, "Multiobjective Optimization Using Evolutionary Algorithms – A Comparative Case Study," Parallel Problem Solving from Nature V, pp. 292-301, Amsterdam, September 1998, Springer-Verlag.
[22] X. Hu, R. Eberhart, and Y. Shi, "Particle Swarm with Extended Memory for Multiobjective Optimization," Proceedings of 2003 IEEE Swarm Intelligence Symposium, Indianapolis, IN, USA, April 2003, pp. 193-197.
[23] E. Zitzler and L. Thiele, "Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach," IEEE Trans. on Evolutionary Computation, Vol. 3, No. 4, 1999, pp. 257-271.
[24] X. Li, "A Nondominated Sorting Particle Swarm Optimizer for Multiobjective Optimization," Lecture Notes in Computer Science, Proceedings of Genetic and Evolutionary Computation GECCO 2003, Vol. 2723, Part I, Berlin, Germany, July 2003, pp. 37-48.
[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II," IEEE Trans. on Evolutionary Computation, Vol. 6, No. 2, 2002, pp. 182-197.
[26] H. Lu, "Dynamic Population Strategy Assisted Particle Swarm Optimization in Multiobjective Evolutionary Algorithm Design," IEEE Neural Network Society, IEEE NNS Student Research Grants 2002, Final Reports 2003.
[27] M. P. Song and G. C. Gu, "Research on Particle Swarm Optimization: A Review," Proceedings of the 3rd International Conference on Machine Learning and Cybernetics, Shanghai, China, 26-29 August 2004, pp. 2236-2241.
[28] S. Mostaghim and J. Teich, "Covering Pareto-Optimal Fronts by Subswarms in Multiobjective Particle Swarm Optimization," Proceedings of IEEE Congress on Evolutionary Computation CEC'2004, Portland, Oregon, USA, June 19-23, 2004, pp. 1404-1411.
[29] J. Branke and S. Mostaghim, "About Selecting the Personal Best in Multi-Objective Particle Swarm Optimization," Proceedings of the 9th International Conference on Parallel Problem Solving from Nature - PPSN IX, Reykjavik, Iceland, September 9-13, 2006, pp. 523-532.
[30] M. Reyes-Sierra and C. A. C. Coello, "Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art," International Journal of Computational Intelligence Research, Vol. 2, No. 3, 2006, pp. 287-308.
[31] J. N. Morse, "Reducing the Size of the Nondominated Set: Pruning by Clustering," Computers and Operations Research, Vol. 7, No. 1-2, 1980, pp. 55-66.
[32] E. Zitzler and L. Thiele, "An Evolutionary Algorithm for Multiobjective Optimization: The Strength Pareto Approach," TIK-Report, No. 43, 1998.
[33] M. A. Abido, "Environmental/Economic Power Dispatch Using Multiobjective Evolutionary Algorithms," IEEE Trans. on Power Systems, Vol. 18, No. 4, November 2003, pp. 1529-1537.
[34] M. A. Abido, "Multiobjective Evolutionary Algorithms for Electric Power Dispatch Problem," IEEE Trans. on Evolutionary Computation, Vol. 10, No. 3, June 2006, pp. 315-329.
[35] http://www.tik.ee.ethz.ch/~zitzler/testdata.html
[36] J. D. Schaffer, Multiple Objective Optimization with Vector Evaluated Genetic Algorithms, PhD Thesis, Vanderbilt University, 1984.
[37] K. Deb, "Multi-Objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems," Evolutionary Computation, Vol. 7, No. 3, 1999, pp. 205-230.
