MS Thesis PSO Algorithm
Satyobroto Talukder
Submitted to the School of Engineering at Blekinge Institute of Technology In partial fulfillment of the requirements for the degree of
Master of Science
February 2011
University advisor: Prof. Elisabeth Rakus-Andersson Department of Mathematics and Science, BTH
E-mail: [email protected] Phone: +46455385408
ABSTRACT
Optimization is a mathematical technique that concerns the finding of maxima or minima of functions in some feasible region. There is hardly a business or industry which is not involved in solving optimization problems, and a variety of optimization techniques compete for the best solution. Particle Swarm Optimization (PSO) is a relatively new, modern, and powerful method of optimization that has been empirically shown to perform well on many of these optimization problems. It is widely used to find the global optimum solution in a complex search space. This thesis aims at providing a review and discussion of the most established results on the PSO algorithm, as well as highlighting the most active research topics that can provide directions for future work and help the practitioner achieve better results with little effort. This thesis introduces the theoretical idea behind the PSO algorithm and gives a detailed explanation of it, its advantages and disadvantages, and the effects and judicious selection of its various parameters. Moreover, this thesis discusses a study of boundary conditions with the invisible wall technique, controlling the convergence behavior of PSO, discrete-valued problems, multi-objective PSO, and applications of PSO. Finally, it presents some improved versions as well as recent progress in the development of the PSO, and future research issues are also given.

Keywords: Optimization, swarm intelligence, particle swarm, social network, convergence, stagnation, multi-objective.
CONTENTS
Chapter 1 - Introduction
  1.1 PSO is a Member of Swarm Intelligence
  1.2 Motivation
  1.3 Research Questions
Chapter 2 - Background
  2.1 Optimization
    2.1.1 Constrained Optimization
    2.1.2 Unconstrained Optimization
    2.1.3 Dynamic Optimization
  2.2 Global Optimization
  2.3 Local Optimization
  2.4 Uniform Distribution
  2.5 Sigmoid Function
Chapter 3 - Basic Particle Swarm Optimization
  3.1 The Basic Model of PSO Algorithm
    3.1.1 Global Best PSO
    3.1.2 Local Best PSO
  3.2 Comparison of gbest to lbest
  3.3 PSO Algorithm Parameters
    3.3.1 Swarm size
    3.3.2 Iteration numbers
    3.3.3 Velocity components
    3.3.4 Acceleration coefficients
  3.4 Geometrical Illustration of PSO
  3.5 Neighborhood Topologies
  3.6 Problem Formulation of PSO Algorithm
  3.7 Advantages and Disadvantages of PSO
Chapter 4 - Empirical Analysis of PSO Characteristics
  4.1 Rate of Convergence Improvements
    4.1.1 Velocity clamping
    4.1.2 Inertia weight
    4.1.3 Constriction Coefficient
  4.2 Boundary Conditions
  4.3 Guaranteed Convergence PSO (GCPSO)
  4.4 Initialization, Stopping Criteria, Iteration Terms and Function Evaluation
    4.4.1 Initial Condition
    4.4.2 Iteration Terms and Function Evaluation
    4.4.3 Stopping Condition
Chapter 5 - Recent Works and Advanced Topics of PSO
  5.1 Multi-start PSO (MSPSO)
  5.2 Multi-phase PSO (MPPSO)
  5.3 Perturbed PSO (PPSO)
  5.4 Multi-Objective PSO (MOPSO)
    5.4.1 Dynamic Neighborhood PSO (DNPSO)
    5.4.2 Multi-Objective PSO (MOPSO)
    5.4.3 Vector Evaluated PSO (VEPSO)
  5.5 Binary PSO (BPSO)
    5.5.1 Problem Formulation (Lot Sizing Problem)
Chapter 6 - Applications of PSO
Chapter 7 - Conclusion
References
List of Figures
Figure 2.1: Illustration of the global minimizer and the local minimizer.
Figure 2.2: Sigmoid function.
Figure 3.1: Plot of the functions f1 and f2.
Figure 3.2: Velocity and position update for a particle in a two-dimensional search space.
Figure 3.3: Velocity and position update for multi-particle in gbest PSO.
Figure 3.4: Velocity and position update for multi-particle in lbest PSO.
Figure 3.5: Neighborhood topologies.
Figure 4.1: Illustration of effects of velocity clamping for a particle in a two-dimensional search space.
Figure 4.2: Various boundary conditions in PSO.
Figure 4.3: Six different boundary conditions for a two-dimensional search space. x' and v' represent the modified position and velocity respectively, and r is a random factor in [0,1].
List of Flowcharts
Flowchart 1: gbest PSO
Flowchart 2: lbest PSO
Flowchart 3: Self-Organized Criticality PSO
Flowchart 4: Perturbed PSO
Flowchart 5: Binary PSO
ACKNOWLEDGEMENT
Thanks to my supervisor Prof. Elisabeth Rakus-Andersson for her guidance and for helping me present my ideas clearly.
CHAPTER 1
Introduction
Scientists, engineers, economists, and managers constantly have to make technological and managerial decisions for the construction and maintenance of systems. Day by day the world becomes more complex and competitive, so decisions must be made in an optimal way. Optimization is therefore the act of obtaining the best result under given circumstances. Optimization originated in the 1940s, when the British military faced the problem of allocating limited resources (for example fighter airplanes, submarines, and so on) to several activities [6]. Over the decades, researchers have developed many solutions to linear and non-linear optimization problems. Mathematically, an optimization problem has a fitness function describing the problem under a set of constraints which represent the solution space for the problem. Most of the traditional optimization techniques, however, evaluate first derivatives to locate the optima on a given constrained surface. Because of the difficulty of evaluating first derivatives for many rough and discontinuous optimization spaces, several derivative-free optimization methods have been constructed in recent times [15].

There is no single known optimization method for solving all optimization problems, and many optimization methods have been developed for different types of problems in recent years. The modern optimization methods (sometimes called nontraditional optimization methods) are very powerful and popular for solving complex engineering problems; they include the particle swarm optimization algorithm, neural networks, genetic algorithms, ant colony optimization, artificial immune systems, and fuzzy optimization [6] [7].

The Particle Swarm Optimization algorithm (abbreviated as PSO) is a novel population-based stochastic search algorithm and an alternative solution to complex non-linear optimization problems. The PSO algorithm was first introduced by Dr. Kennedy and Dr. Eberhart in 1995, and its basic idea was originally inspired by simulations of the social behavior of animals such as bird flocking and fish schooling. It is based on the natural process of group communication to share individual knowledge when a group of birds or insects searches for food or migrates in a searching space, even though none of the birds or insects knows where the best position is. From the nature of the social behavior, if any member finds a desirable path, the rest of the members will quickly follow. The PSO algorithm essentially learns from animal activity and behavior to solve optimization problems. In PSO, each member of the population is called a particle and the population is called a swarm. Starting with a randomly initialized population and moving in randomly chosen directions, each particle travels through the searching space and remembers the best previous positions of itself and its neighbors. Particles of a swarm communicate good positions to each other and dynamically adjust their own position and velocity based on the best position of all particles. The next step begins when all particles have been moved. Finally, all particles tend to fly towards better and better positions over the searching process until the swarm moves close to an optimum of the fitness function.

The PSO method is becoming very popular because of its simplicity of implementation and its ability to converge swiftly to a good solution. It does not require any gradient information of the function to be optimized and uses only primitive mathematical operators. Compared with other optimization methods, it is faster, cheaper, and more efficient. In addition, there are few parameters to adjust in PSO. That is why PSO is an ideal optimization problem solver: it is well suited to non-linear, non-convex, continuous, discrete, and integer-variable problems.
1.2 Motivation
The PSO method was first introduced in 1995. Since then, it has been used as a robust method to solve optimization problems in a wide variety of applications. On the other hand, the PSO method does not always work well and still has room for improvement. This thesis discusses a conceptual overview of the PSO algorithm and a number of modifications of the basic PSO. Besides, it describes different types of PSO algorithms and flowcharts, recent works, advanced topics, and application areas of PSO.
1.3 Research Questions

Q.1 is illustrated in Sections 4.1 and 4.3; Q.2 in Section 5.1; Q.3 (a) in Section 4.1.1; Q.3 (b) and (c) in Section 3.3.5; Q.4 and Q.5 in Sections 4.2 and 5.5 respectively.
CHAPTER 2
Background
This chapter reviews some of the basic definitions related to this thesis.
2.1 Optimization
Optimization determines the best-suited solution to a problem under given circumstances. For example, a manager needs to make many technological and managerial plans at different times. The final goal of the plans is either to minimize the effort required or to maximize the desired benefit. Optimization refers to both minimization and maximization tasks. Since the maximization of any function is mathematically equivalent to the minimization of its additive inverse, the terms minimization and optimization are used interchangeably [6]. For this reason, nowadays optimization is very important in many professions. Optimization problems may be linear (called linear optimization problems) or nonlinear (called non-linear optimization problems); non-linear optimization problems are generally very difficult to solve. Based on the problem characteristics, optimization problems are classified as follows:
2.1.1 Constrained Optimization

A constrained optimization problem has the general form

  minimize f(x), x = (x_1, ..., x_n) in S,
  subject to g_i(x) <= 0, i = 1, ..., m, and h_j(x) = 0, j = 1, ..., p,   (2.1)

where g_i and h_j are the inequality and equality constraint functions respectively.

2.1.2 Unconstrained Optimization

An unconstrained optimization problem has the form

  minimize f(x), x in R^n,   (2.2)

where n is the dimension of x.

2.1.3 Dynamic Optimization

A dynamic optimization problem has the form

  minimize f(x, w(t)), x in S,   (2.3)

where w(t) is a vector of time-dependent objective function control parameters, and the goal is to find the optimum x*(t) at each time step t.
There are two techniques to solve optimization problems: Global and Local optimization techniques.
2.2 Global Optimization

The global minimizer x* is defined so that

  f(x*) <= f(x) for all x in S,   (2.4)

where S is the search space. Here, the term global minimum refers to the value f(x*), and x* is called the global minimizer. Some global optimization methods require a starting point z, and they will be able to find the global minimizer if z is located sufficiently close to x*.
2.3 Local Optimization

A local minimizer x*_B of a region B is defined so that

  f(x*_B) <= f(x) for all x in B,   (2.5)

where B is a subset of S. Here, a local optimization method should guarantee that a local minimizer of the set B is found.
Finally, local optimization techniques try to find a local minimum and its corresponding local minimizer, whereas global optimization techniques seek to find a global minimum or lowest function value and its corresponding global minimizer.
Figure 2.1 below illustrates the difference between the global minimizer x* and the local minimizer x*_L for an example function.
Figure 2.1: Illustration of the local minimizer x*_L and the global minimizer x*.
2.4 Uniform Distribution
The probability density function (PDF) and cumulative distribution function (CDF) for a continuous uniform distribution on the interval [a, b] are respectively

  f(x) = 1/(b - a) for a <= x <= b, and f(x) = 0 otherwise,   (2.6)

and

  F(x) = 0 for x < a, F(x) = (x - a)/(b - a) for a <= x <= b, and F(x) = 1 for x > b.   (2.7)

The uniform PDF is thus flat at the height 1/(b - a) between a and b, and the uniform CDF rises linearly from 0 at a to 1 at b.

2.5 Sigmoid Function

The sigmoid function is an S-shaped function defined by

  s(x) = 1 / (1 + e^(-x)),   (2.9)

which maps any real input into the interval (0, 1).

Figure 2.2: Sigmoid function.
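As a quick illustration of equations (2.6), (2.7) and (2.9), the following Python sketch evaluates the uniform PDF and CDF and the sigmoid function; the helper names are assumptions for demonstration only:

```python
import math

def uniform_pdf(x, a, b):
    # f(x) = 1/(b - a) on [a, b], 0 elsewhere (equation 2.6)
    return 1.0 / (b - a) if a <= x <= b else 0.0

def uniform_cdf(x, a, b):
    # F(x) rises linearly from 0 at a to 1 at b (equation 2.7)
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def sigmoid(x):
    # s(x) = 1 / (1 + e^(-x)) maps any real x into (0, 1) (equation 2.9)
    return 1.0 / (1.0 + math.exp(-x))

print(uniform_pdf(0.5, 0.0, 2.0))   # 0.5
print(uniform_cdf(0.5, 0.0, 2.0))   # 0.25
print(sigmoid(0.0))                 # 0.5
```

The sigmoid function reappears in Chapter 5, where the binary PSO uses it to turn velocities into bit-flip probabilities.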
CHAPTER 3
Basic Particle Swarm Optimization
This chapter discusses a conceptual overview of the PSO algorithm and its parameters selection strategies, geometrical illustration and neighborhood topology, advantages and disadvantages of PSO, and mathematical explanation.
Figure 3.1: Plot of the functions f1 and f2: (a) unimodal, with a single minimum; (b) multimodal, with multiple local minima.
From figure 3.1 (a), it is clear that the global minimum of the function f1 lies at the origin of the search space. That means f1 is a unimodal function, which has only one minimum. However, finding the global optimum is not so easy for multimodal functions, which have multiple local minima. Figure 3.1 (b) shows the function f2, which has a rough search space with multiple peaks, so many agents have to start from different initial locations and
continue exploring the search space until at least one agent reaches the global optimal position. During this process all agents can communicate and share their information among themselves [15]. This thesis discusses how to solve such multimodal function problems.

The Particle Swarm Optimization (PSO) algorithm is a multi-agent parallel search technique which maintains a swarm of particles, where each particle represents a potential solution. All particles fly through a multidimensional search space, and each particle adjusts its position according to its own experience and that of its neighbors. Let x_i(t) denote the position vector of particle i in the multidimensional search space (i.e. x_i(t) in R^n) at time step t; then the position of each particle is updated by

  x_i(t+1) = x_i(t) + v_i(t+1), with x_i(0) ~ U(x_min, x_max),   (3.4)

where v_i(t) is the velocity vector of the particle that drives the optimization process and reflects both the particle's own experience knowledge and the social experience knowledge from all particles, and U(x_min, x_max) is the uniform distribution with x_min and x_max as its minimum and maximum values respectively.
Therefore, in a PSO method, all particles are initiated randomly and evaluated to compute the fitness, together with finding the personal best (the best value of each particle) and the global best (the best value of any particle in the entire swarm). After that, a loop starts to find an optimum solution. In the loop, first the velocities of the particles are updated using the personal and global bests, and then each particle's position is updated by the current velocity. The loop is ended by a stopping criterion determined in advance [22]. Basically, two PSO algorithms, namely the Global Best (gbest) and Local Best (lbest) PSO, have been developed; they differ in the size of their neighborhoods. These algorithms are discussed in Sections 3.1.1 and 3.1.2 respectively.
3.1.1 Global Best PSO

In the gbest PSO, the neighborhood of each particle is the entire swarm [20]. The following equations (3.5) and (3.6) define how the personal and global best values are updated, respectively. Considering minimization problems, the personal best position P_best,i at the next time step, t + 1, is calculated as

  P_best,i(t+1) = P_best,i(t) if f(x_i(t+1)) >= f(P_best,i(t)),
  P_best,i(t+1) = x_i(t+1) if f(x_i(t+1)) < f(P_best,i(t)),   (3.5)

where f is the fitness function. The global best position G_best(t) at time step t is the best of the personal bests,

  G_best(t) in {P_best,0(t), ..., P_best,P(t)} such that f(G_best(t)) = min{ f(P_best,0(t)), ..., f(P_best,P(t)) }.   (3.6)
Therefore it is important to note that the personal best is the best position that the individual particle has visited since the first time step. On the other hand, the global best position is the best position discovered by any of the particles in the entire swarm [4]. For the gbest PSO method, the velocity of particle i is calculated by

  v_ij(t+1) = v_ij(t) + c1 r1j(t) [P_best,ij(t) - x_ij(t)] + c2 r2j(t) [G_best,j(t) - x_ij(t)],   (3.7)

where
v_ij(t) is the velocity of particle i in dimension j at time t;
x_ij(t) is the position of particle i in dimension j at time t;
P_best,ij(t) is the personal best position of particle i in dimension j found from initialization through time t;
G_best,j(t) is the global best position in dimension j found from initialization through time t;
c1 and c2 are positive acceleration constants which are used to scale the contribution of the cognitive and social components respectively;
r1j(t) and r2j(t) are random numbers from the uniform distribution U(0, 1) at time t.
Flowchart 1: gbest PSO. Starting at t = 0, the algorithm chooses random numbers r1j(t), r2j(t), updates each velocity by v_ij(t+1) = v_ij(t) + c1 r1j(t)[P_best,ij(t) - x_ij(t)] + c2 r2j(t)[G_best,j(t) - x_ij(t)] and each position by x_ij(t+1) = x_ij(t) + v_ij(t+1), looping over the dimensions j < D and the particles i < P, evaluates the fitness f_i(t) at x_i(t), updates the personal and global bests, and then increments t until the stopping criterion is met.
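The following Python sketch makes the gbest PSO loop of Flowchart 1 concrete. It is a minimal illustration rather than the thesis' reference implementation: the sphere objective, the swarm size, and the values c1 = c2 = 2 are assumptions for demonstration.

```python
import random

def gbest_pso(f, dim, x_min, x_max, n_particles=20, c1=2.0, c2=2.0, max_iter=100):
    # Initialize positions uniformly in [x_min, x_max] and velocities to zero (equation 3.4).
    x = [[random.uniform(x_min, x_max) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p_best = [xi[:] for xi in x]            # personal best positions (equation 3.5)
    g_best = min(p_best, key=f)[:]          # global best position (equation 3.6)

    for _ in range(max_iter):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update (equation 3.7): cognitive + social components.
                v[i][j] += c1 * r1 * (p_best[i][j] - x[i][j]) + c2 * r2 * (g_best[j] - x[i][j])
                x[i][j] += v[i][j]          # position update (equation 3.4)
            if f(x[i]) < f(p_best[i]):      # minimization: update personal best
                p_best[i] = x[i][:]
                if f(p_best[i]) < f(g_best):
                    g_best = p_best[i][:]
    # Note: this basic form may diverge without velocity clamping or an
    # inertia weight; those refinements are discussed in Chapter 4.
    return g_best

# Example: minimize the sphere function (an assumed test function).
sphere = lambda x: sum(xi * xi for xi in x)
print(gbest_pso(sphere, dim=2, x_min=-5.0, x_max=5.0))
```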
3.1.2 Local Best PSO

In the lbest PSO, each particle's neighborhood is only a subset of the swarm, and the velocity of particle i is calculated by

  v_ij(t+1) = v_ij(t) + c1 r1j(t) [P_best,ij(t) - x_ij(t)] + c2 r2j(t) [L_best,ij(t) - x_ij(t)],   (3.8)

where L_best,i is the best position that any particle has had in the neighborhood of particle i, found from initialization through time t.

Flowchart 2: lbest PSO. The structure mirrors Flowchart 1, except that after evaluating f_i(t) the personal bests of the immediate neighbors (f_best,i-1, f_best,i, f_best,i+1) are compared to select the local best L_best,i, which replaces G_best in the velocity update.
Finally, we can say from Sections 3.1.1 and 3.1.2 that in the gbest PSO algorithm every particle obtains information from the best particle in the entire swarm, whereas in the lbest PSO algorithm each particle obtains information only from its immediate neighbors in the swarm [1].
2. The term c1 r1j(t)[P_best,ij(t) - x_ij(t)] is called the cognitive component; it measures the performance of particle i relative to its past performances. This component acts as an individual memory of the position that was best for the particle. The effect of the cognitive component is the tendency of individuals to return to positions that satisfied them most in the past; it is also referred to as the nostalgia of the particle.

3. The term c2 r2j(t)[G_best,j(t) - x_ij(t)] for gbest PSO, or c2 r2j(t)[L_best,ij(t) - x_ij(t)] for lbest PSO, is called the social component; it measures the performance of particle i relative to a group of particles or neighbors. The effect of the social component is that each particle flies towards the best position found by its neighborhood.

When c1 > c2, each particle is more strongly influenced by its personal best position, resulting in excessive wandering. In contrast, when c2 > c1, all particles are much more influenced by the global best position, which causes all particles to run prematurely towards the optima [4] [11].
Normally, c1 and c2 are static, with their optimized values found empirically. Wrong initialization of c1 and c2 may result in divergent or cyclic behavior [4]. Different empirical studies have proposed that the two acceleration constants should be c1 = c2 = 2.
Figure 3.2: Velocity and position update for a particle in a two-dimensional search space, showing the cognitive velocity P_best,i(t) - x_i(t) and the new velocity v_i(t+1) that moves the particle from x_i(t) to x_i(t+1) and x_i(t+2).
Figure 3.2 illustrates how the three velocity components contribute to moving the particle towards the global best position at time steps t + 1 and t + 2 respectively.
Figure 3.3: Velocity and position update for multi-particle in gbest PSO: (a) at time t = 0; (b) at time t = 1. The global best position is marked G_best.
Figure 3.3 shows the position updates for more than one particle in a two-dimensional search space and illustrates the gbest PSO. The optimum position is marked in the figure. Figure 3.3 (a) shows the initial positions of all particles together with the global best position. The cognitive component is zero at t = 0, and all particles are attracted toward the best position by the social component only; the global best position does not change here. Figure 3.3 (b) shows the new positions of all particles and a new global best position after the first iteration, i.e. at t = 1.
Figure 3.4: Velocity and position update for multi-particle in lbest PSO: (a) at time t = 0; (b) at time t = 1. Particles a-j are grouped into three neighborhood subsets, each with its own local best L_best.
Figure 3.4 illustrates how all particles are attracted by their immediate neighbors in the search space using lbest PSO; there are several subsets of particles, and one subset is defined for each particle, from which the local best particle is then selected. Figure 3.4 (a) shows that particles a, b and c move towards particle d, which is the best position in subset 1. In subset 2, particles e and f move towards particle g. Similarly, particle h moves towards particle i, and so does j, in subset 3 at time step t = 0. In figure 3.4 (b), at time step t = 1, particle d is still the best position for subset 1, so particles a, b and c again move towards d.
Figure 3.5: Neighborhood topologies: (a) star, (b) ring, (c) wheel, with a focal particle, and (d) four clusters.
Figure 3.5 (a) illustrates the star topology, where each particle connects with every other particle. This topology leads to faster convergence than the other topologies, but it is more susceptible to being trapped in local minima. Because all particles know each other, this topology is referred to as the gbest PSO. Figure 3.5 (b) illustrates the ring topology, where each particle is connected only with its immediate neighbors. In this process, when one particle finds a better result, it passes it to its immediate neighbors, and these two immediate neighbors pass it to their immediate neighbors, until it reaches the last particle. Thus the best result found is spread very slowly around the ring. Convergence is slower, but larger parts of the search space are covered than with the star topology. This topology is referred to as the lbest PSO. Figure 3.5 (c) illustrates the wheel topology, in which only one particle (a focal particle) connects to the others, and all information is communicated through this particle. The focal particle compares the best performance of all particles in the swarm, adjusts its position towards the best performer, and then communicates its new position to all the particles. Figure 3.5 (d) illustrates a four-cluster topology, where four clusters (or cliques) are connected with two edges between neighboring clusters and one edge between opposite clusters.
There are further neighborhood structures or topologies (for instance, the pyramid topology, the Von Neumann topology, and so on), but no single topology is known to be the best for all kinds of optimization problems.
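As a concrete illustration of the ring (lbest) topology, the following Python sketch selects each particle's neighborhood best among its immediate neighbors; the neighborhood radius of one and the function names are assumptions for demonstration:

```python
def ring_lbest(p_best, fitness, i):
    # Immediate neighbors of particle i on a ring: i-1, i, i+1 (indices wrap around).
    n = len(p_best)
    neighborhood = [(i - 1) % n, i, (i + 1) % n]
    # The lbest for particle i is the best personal best within its neighborhood.
    best = min(neighborhood, key=lambda k: fitness(p_best[k]))
    return p_best[best]
```

Widening the neighborhood list until it spans the whole swarm recovers the star topology, i.e. the gbest PSO.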
3.6 Problem Formulation of PSO Algorithm

Problem: Find the maximum of a given function f(x) over a given range using the PSO algorithm. Use 9 particles with given initial positions x_1(0), ..., x_9(0). Show the detailed computations for iterations 1, 2 and 3.
Solution:
Step 1: Choose the number of particles (here N = 9) and represent the initial population at iteration t = 0 by the given initial positions x_1(0), ..., x_9(0). Evaluate the objective function values at these positions.
Step 2: Set the iteration number to t = 1 and go to step 3.

Step 3: Find the personal best for each particle by equation (3.5).
So, at the first iteration, the personal best position of each particle is its initial position, P_best,i(1) = x_i(0).

Step 4: Find the global best, G_best, as the personal best position with the best objective function value.

Step 5: Considering the random numbers in the range (0, 1) as r1 and r2, find the velocities of the particles by the velocity update equation (3.7).
Step 6: Find the new values of x_i(1) by x_i(1) = x_i(0) + v_i(1) for each of the nine particles.

Step 7: Find the objective function values of x_i(1).
Step 8: Stopping criterion: If the terminal rule is satisfied, stop the iteration and output the results; otherwise increment the iteration number and go to step 2.
Step 2: Set the iteration number to t = 2 and go to step 3.

Step 3: Find the personal best for each particle.
Step 4: Find the global best.

Step 5: Considering the random numbers in the range (0, 1) as r1 and r2, find the velocities of the particles by the velocity update equation (3.7).
Step 6: Find the new values of x_i(2) by x_i(2) = x_i(1) + v_i(2).

Step 7: Find the objective function values of x_i(2).
Step 8: Stopping criterion: If the terminal rule is satisfied, stop the iteration and output the results; otherwise increment the iteration number and go to step 2.
Step 2: Set the iteration number to t = 3 and go to step 3.

Step 3: Find the personal best for each particle.
Step 4: Find the global best.

Step 5: Considering the random numbers in the range (0, 1) as r1 and r2, find the velocities of the particles by the velocity update equation (3.7).
Step 6: Find the new values of x_i(3) by x_i(3) = x_i(2) + v_i(3).

Step 7: Find the objective function values of x_i(3).
Step 8: Stopping criterion: If the terminal rule is satisfied, stop the iteration and output the results; otherwise increment the iteration number and go to step 2.
Finally, if the values of x_i(3) have not converged, we increment the iteration number to t = 4 and go to step 2. When the positions of all particles converge to similar values, the method has converged, and the corresponding value of x is the optimum solution. Therefore the iterative process is continued until all particles converge to a single value.
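A compact Python sketch of this step-by-step procedure is given below. The one-dimensional objective function, the initialization range, and the acceleration constants are hypothetical stand-ins, since the numerical details of the original problem statement are not reproduced here:

```python
import random

f = lambda x: -x * x + 5 * x + 20                    # hypothetical maximization objective
x = [random.uniform(-2.0, 2.0) for _ in range(9)]    # step 1: nine particles
v = [0.0] * 9
p_best, g_best = x[:], max(x, key=f)

for t in range(1, 4):                                # iterations 1, 2, 3
    r1, r2 = random.random(), random.random()        # step 5: random numbers in (0, 1)
    for i in range(9):
        # step 5: velocity update (c1 = c2 = 1 assumed for this sketch)
        v[i] += r1 * (p_best[i] - x[i]) + r2 * (g_best - x[i])
        x[i] += v[i]                                 # step 6: new position
        if f(x[i]) > f(p_best[i]):                   # step 3: personal best (maximization)
            p_best[i] = x[i]
    g_best = max(p_best, key=f)                      # step 4: global best
print(g_best, f(g_best))                             # steps 7-8: report the best found
```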
CHAPTER 4
Empirical Analysis of PSO Characteristics
This chapter discusses a number of modifications of the basic PSO: how to improve the speed of convergence and control the exploration-exploitation trade-off, how to overcome stagnation and premature convergence, the velocity clamping technique, boundary conditions, and the initialization and stopping conditions, all of which are very important in the PSO algorithm.
4.1 Rate of Convergence Improvements

4.1.1 Velocity clamping

Figure 4.1: Illustration of the effects of velocity clamping for a particle in a two-dimensional search space.
Figure 4.1 illustrates how velocity clamping changes the step size as well as the search direction when a particle moves in the process. In this figure, x'_i(t+1) and x_i(t+1) denote respectively the position of particle i without using velocity clamping and the result of velocity clamping [4]. Now if a particle's velocity exceeds its specified maximum velocity V_max,j in dimension j, the velocity is set to the value V_max,j before the position update:

  v_ij(t+1) = v'_ij(t+1) if v'_ij(t+1) < V_max,j, and v_ij(t+1) = V_max,j otherwise,   (4.1)

where v'_ij(t+1) is calculated using equation (3.7) or (3.8).
If the maximum velocity V_max,j is too large, the particles may move erratically and jump over the optimal solution. On the other hand, if V_max,j is too small, the particles' movement is limited and the swarm may not explore sufficiently, or may become trapped in a local optimum. This problem can be solved by taking the maximum velocity to be a fraction of the domain of the search space on each dimension, subtracting the lower bound from the upper bound:

  V_max,j = delta (x_max,j - x_min,j), delta in (0, 1],   (4.2)

where x_max,j and x_min,j are respectively the maximum and minimum values of the search domain in dimension j. For example, if x_max,j = 250 and x_min,j = -50 on each dimension of the search space, the range of the search space is 300 per dimension; velocities are then clamped to a fraction of that range according to equation (4.2), so the maximum velocity is V_max,j = 300 delta. There is another problem when all velocities become equal to the maximum velocity V_max,j. To solve it, V_max,j can be reduced over time: the initial step starts with large values of V_max,j, which are then decreased. The advantage of velocity clamping is that it controls the explosion of velocity in the search space. The disadvantage is that the best value of delta must be chosen for each different optimization problem using empirical techniques [4], and finding an accurate value for the problem being solved is critical and not simple, as a poorly chosen V_max,j can lead to extremely poor performance [1]. Finally, velocity clamping was first introduced to prevent explosion and divergence; however, it has become unnecessary for convergence because of the use of the inertia weight (Section 4.1.2) and the constriction factor (Section 4.1.3) [15].
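A minimal Python sketch of equations (4.1) and (4.2); the clamping fraction delta = 0.5 is an illustrative assumption:

```python
def clamp_velocity(v, x_min, x_max, delta=0.5):
    # Equation (4.2): V_max,j = delta * (x_max,j - x_min,j) per dimension j.
    v_max = [delta * (hi - lo) for lo, hi in zip(x_min, x_max)]
    # Equation (4.1): clip each velocity component into [-V_max,j, V_max,j].
    return [max(-vm, min(vm, vj)) for vj, vm in zip(v, v_max)]

# Example from the text: x in [-50, 250] per dimension gives a range of 300;
# with delta = 0.5 the velocity is clamped to 150 per dimension.
print(clamp_velocity([400.0, -10.0], [-50.0, -50.0], [250.0, 250.0]))  # [150.0, -10.0]
```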
4.1.2 Inertia weight

The inertia weight w controls the momentum of a particle by weighting the contribution of its previous velocity, and thereby balances exploration and exploitation. With the inertia weight, the velocity update of equation (3.7) becomes

  v_ij(t+1) = w v_ij(t) + c1 r1j(t)[P_best,ij(t) - x_ij(t)] + c2 r2j(t)[G_best,j(t) - x_ij(t)].   (4.3)
Van den Bergh and Engelbrecht, as well as Trelea, have defined a condition,

  w > (c1 + c2)/2 - 1,   (4.5)

that guarantees convergence [4]; divergent or cyclic behavior can occur in the process if this condition is not satisfied. Shi and Eberhart defined a technique for adapting the inertia weight dynamically using a fuzzy system [11]. The fuzzy system is a process that can be used to convert a linguistic description of a problem into a model in order to predict a numeric variable, given two inputs (one is the fitness of the global best position and the other is the current value of the inertia weight). The authors chose to use three fuzzy membership functions, corresponding to three fuzzy sets, namely low, medium, and high, that the input variables can belong to. The output of the fuzzy system represents the suggested change in the value of the inertia weight [4] [11]. The fuzzy inertia weight method has a clear advantage on unimodal functions, where an optimal inertia weight can be determined at each time step; when a function has multiple local minima, it is more difficult to find an optimal inertia weight [11]. The inertia weight technique is very useful to ensure convergence. However, a disadvantage of this method is that once the inertia weight is decreased, it cannot increase again if the swarm needs to search new areas; the method is not able to recover its exploration mode [16].
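A common practical choice is to decrease the inertia weight linearly over the run and to check condition (4.5); this Python sketch uses the typical bounds of 0.9 down to 0.4, which are illustrative values rather than ones prescribed by the thesis:

```python
def inertia_weight(t, max_iter, w_start=0.9, w_end=0.4):
    # Linearly decrease w from w_start to w_end over the run (typical values assumed).
    return w_start - (w_start - w_end) * t / max_iter

def satisfies_convergence_condition(w, c1, c2):
    # Condition (4.5): w > (c1 + c2)/2 - 1 guards against divergent/cyclic behavior.
    return w > (c1 + c2) / 2.0 - 1.0

print(inertia_weight(50, 100))                         # 0.65
print(satisfies_convergence_condition(0.7, 1.4, 1.4))  # True: 0.7 > 0.4
```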
4.1.3 Constriction Coefficient

Clerc developed the constriction coefficient chi, which modifies the velocity update to

  v_ij(t+1) = chi [ v_ij(t) + c1 r1j(t)(P_best,ij(t) - x_ij(t)) + c2 r2j(t)(G_best,j(t) - x_ij(t)) ],   (4.6)

where chi = 2 kappa / |2 - phi - sqrt(phi(phi - 4))|, with phi = c1 + c2 and kappa in [0, 1]. If phi < 4, all particles would slowly spiral toward and around the best solution in the search space without a convergence guarantee; if phi >= 4, all particles converge quickly and convergence is guaranteed [1]. The amplitude of a particle's oscillation is decreased by the constriction coefficient, and the search focuses on the local and neighborhood previous best points [7] [15]. If the particle's previous best position and the neighborhood best position are near each other, the particle performs a local search; if their positions are far from each other, the particle performs a global search. The constriction coefficient guarantees convergence of the particles over time and also prevents collapse [15]. Eberhart and Shi empirically showed that if the constriction coefficient and velocity clamping are used together, a faster convergence rate is obtained [4]. The disadvantage of the constriction coefficient is that if a particle's personal best position and the neighborhood best position are far apart, the particle may follow wider cycles and not converge [16]. Finally, a PSO algorithm with constriction coefficient is algebraically equivalent to a PSO algorithm with inertia weight: equations (4.3) and (4.6) can be transformed into one another by mapping w to chi and c_i to chi c_i [19].
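The following sketch computes the constriction coefficient of equation (4.6); the values c1 = c2 = 2.05 and kappa = 1 are commonly used illustrative choices, not values mandated by the thesis:

```python
import math

def constriction_coefficient(c1=2.05, c2=2.05, kappa=1.0):
    phi = c1 + c2                       # requires phi >= 4 for guaranteed convergence
    return 2.0 * kappa / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction_coefficient()
print(chi)  # ~0.7298, the widely used value for c1 = c2 = 2.05
```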
4.2 Boundary Conditions

Figure 4.2: Various boundary conditions in PSO. When a particle crosses the boundary, the restricted conditions relocate it (absorbing: v' = 0; reflecting: v' = -v), while the unrestricted conditions leave it outside (invisible: v' = v; invisible/reflecting: v' = -v).
The following Figure 4.3 shows how the position and velocity of an errant particle are treated by the different boundary conditions.
Figure 4.3: Six different boundary conditions for a two-dimensional search space: (a) Absorbing, (b) Reflecting, (c) Damping, (d) Invisible, (e) Invisible/Reflecting, (f) Invisible/Damping. x' and v' represent the modified position and velocity respectively, and r is a random factor in [0,1].
The six boundary conditions are discussed below [17]:

Absorbing boundary condition (ABC): When a particle goes outside the solution space in one of the dimensions, the particle is relocated at the wall of the solution space and the velocity of the particle is set to zero in that dimension, as illustrated in Figure 4.3(a). This means that the kinetic energy of the particle is absorbed by a soft wall, so that the particle will return to the solution space to find the optimum solution.

Reflecting boundary condition (RBC): When a particle goes outside the solution space in one of the dimensions, the particle is relocated at the wall of
the solution space and the sign of the velocity of the particle is changed to the opposite direction in that dimension, as illustrated in Figure 4.3(b). This means that the particle is reflected by a hard wall and then moves back toward the solution space to find the optimum solution.

Damping boundary condition (DBC): When a particle goes outside the solution space in one of the dimensions, the particle is relocated at the wall of the solution space and the sign of the velocity of the particle is changed to the opposite direction in that dimension with a random coefficient between 0 and 1, as illustrated in Figure 4.3(c). Thus the damping boundary condition acts very similarly to the reflecting boundary condition, except that a randomly determined part of the energy is lost because of the imperfect reflection.

Invisible boundary condition (IBC): In this condition, a particle is considered to stay outside the solution space, while the fitness evaluation of that position is skipped and a bad fitness value is assigned to it, as illustrated in Figure 4.3(d). Thus the attraction of the personal and global best positions will counteract the particle's momentum and ultimately pull it back inside the solution space.

Invisible/Reflecting boundary condition (I/RBC): In this condition, a particle is considered to stay outside the solution space, while the fitness evaluation of that position is skipped and a bad fitness value is assigned to it, as illustrated in Figure 4.3(e). In addition, the sign of the velocity of the particle is changed to the opposite direction in that dimension, so that the momentum of the particle is reversed to accelerate it back toward the solution space.

Invisible/Damping boundary condition (I/DBC): In this condition, a particle is considered to stay outside the solution space, while the fitness evaluation of that position is skipped and a bad fitness value is assigned to it, as illustrated in Figure 4.3(f). In addition, the velocity of the particle is changed to the opposite direction with a random coefficient between 0 and 1 in that dimension, so that the reversed momentum which accelerates the particle back toward the solution space is damped.
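The restricted conditions are easy to state in code. Below is a hedged Python sketch of the absorbing, reflecting, and damping rules for a single dimension (the invisible variants instead skip the fitness evaluation and assign a bad fitness value, so they require no position change); the function name and defaults are assumptions:

```python
import random

def apply_boundary(x, v, lo, hi, condition="absorbing"):
    # Handle one dimension of an errant particle (Figure 4.3 (a)-(c)).
    if lo <= x <= hi:
        return x, v
    wall = lo if x < lo else hi
    if condition == "absorbing":       # relocate to the wall, zero the velocity
        return wall, 0.0
    if condition == "reflecting":      # relocate to the wall, reverse the velocity
        return wall, -v
    if condition == "damping":         # reverse with a random damping factor in [0, 1]
        return wall, -random.random() * v
    raise ValueError(condition)

print(apply_boundary(120.0, 15.0, -100.0, 100.0, "reflecting"))  # (100.0, -15.0)
```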
4.3 Guaranteed Convergence PSO (GCPSO)

To solve this problem a new parameter is introduced to the PSO. Let tau be the index of the global best particle, so that

  P_best,tau = G_best.   (4.7)

A new velocity update equation for the globally best positioned particle, tau, has been suggested in order to keep it moving until it has reached a local minimum. The suggested equation is

  v_tau,j(t+1) = -x_tau,j(t) + G_best,j(t) + w v_tau,j(t) + rho(t)(1 - 2 r2j(t)),   (4.8)

where rho(t) is a scaling factor that causes the PSO to perform a random search in an area surrounding the global best position G_best(t); it is defined in equation (4.10) below. The term -x_tau,j(t) resets the particle's position to the position G_best,j(t), the term w v_tau,j(t) represents the current search direction, and rho(t)(1 - 2 r2j(t)) generates a random sample from a sample space with side lengths 2 rho(t).

Combining the position update equation (3.4) and the new velocity update equation (4.8) for the global best particle yields the new position update equation

  x_tau,j(t+1) = G_best,j(t) + w v_tau,j(t) + rho(t)(1 - 2 r2j(t)),   (4.9)

while all other particles in the swarm continue using the usual velocity update equation (4.3) and the position update equation (3.4) respectively. The parameter rho(t) controls the diameter of the search area, and its value is adapted after each time step using

  rho(t+1) = 2 rho(t) if #successes(t) > s_c,
  rho(t+1) = 0.5 rho(t) if #failures(t) > f_c,
  rho(t+1) = rho(t) otherwise,   (4.10)

where #successes and #failures respectively denote the number of consecutive successes and failures, and a failure is defined as f(G_best(t)) = f(G_best(t-1)). The following conditions must also be implemented to ensure that equation (4.10) is well defined:

  #successes(t+1) > #successes(t) implies #failures(t+1) = 0, and
  #failures(t+1) > #failures(t) implies #successes(t+1) = 0.   (4.11)

Therefore, when a success occurs the failure count is set to zero, and similarly when a failure occurs the success count is reset.
The optimal choice of values for s_c and f_c depends on the objective function. It is difficult to obtain good results using a random search in only a few iterations for high-dimensional search spaces, and it is recommended to use s_c = 15 and f_c = 5. On the other hand, the optimal values for s_c and f_c can also be found dynamically. For instance, f_c may be increased every time the failure counter exceeds it; that is, it becomes more difficult to reach the success state if failures occur frequently, which prevents the value of rho from fluctuating rapidly. Such a strategy can be used for s_c as well [11]. GCPSO uses an adaptive rho(t) to obtain the optimal size of the sampling volume given the current state of the algorithm. If a specific value of rho(t) repeatedly results in a success, a larger sampling volume is selected to increase the maximum distance traveled in one step. Conversely, when rho(t) produces consecutive failures, the sampling volume is too large and must be reduced. Finally, stagnation is totally prevented if rho(t) > 0 for all steps [4].
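A sketch of the rho adaptation in equations (4.10) and (4.11); the thresholds s_c = 15 and f_c = 5 are the values commonly recommended in the GCPSO literature and are used here as assumed defaults:

```python
def update_rho(rho, successes, failures, s_c=15, f_c=5):
    # Equation (4.10): grow the search radius after repeated successes,
    # shrink it after repeated failures, otherwise keep it unchanged.
    if successes > s_c:
        return 2.0 * rho
    if failures > f_c:
        return 0.5 * rho
    return rho

def update_counters(success, successes, failures):
    # Equation (4.11): a success resets the failure count and vice versa.
    return (successes + 1, 0) if success else (0, failures + 1)
```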
4.4.1 Initialization
In the PSO algorithm, initialization of the swarm is very important, because proper initialization may control the exploration-exploitation trade-off in the search space more efficiently and find a better result. Usually, a uniform distribution over the search space is used for initialization of the swarm. The initial diversity of the swarm is important for the PSO's performance: it denotes how much of the search space is covered and how well the particles are distributed. Moreover, when the initial swarm does not cover the entire search space, the PSO algorithm will have difficulty finding the optimum if the optimum is located outside the covered area; the PSO will then only discover the optimum if a particle's momentum carries it into the uncovered area. Therefore, the particles are best initialized within the domain defined by x_min and x_max, which represent the minimum and maximum ranges of x in each dimension j respectively [4]. The initialization method for the position of each particle is then given by

  x_ij(0) = x_min,j + r_ij (x_max,j - x_min,j),   (4.12)

where r_ij ~ U(0, 1).
The velocities of the particles can be initialized to zero, i.e. v_i(0) = 0, since the randomly initialized positions already ensure random positions and moving directions. Particles may also be initialized with nonzero velocities, but this must be done with care, and such velocities should not be too large. In general, a large velocity has large momentum and consequently causes a large position update; such large initial position updates can cause particles to leave the boundaries of the feasible region, and the algorithm then needs more iterations before settling on the best solution [4].
4.4.3 Stopping Condition

1) The algorithm is terminated when a maximum number of iterations (or function evaluations) has been exceeded.

2) The algorithm is terminated when there is no significant improvement over a number of iterations. This improvement can be measured in different ways; for instance, the process may be considered to have terminated if the average change of the particles' positions is very small, or if the average velocity of the particles is approximately zero, over a number of iterations [4].

3) The algorithm is terminated when the normalized swarm radius is approximately zero. The normalized swarm radius is defined as

  R_norm = R_max / diameter(S),   (4.13)

where diameter(S) is the diameter of the initial swarm and R_max = max_i ||x_i(t) - G_best(t)|| is the maximum radius.
The process will terminate when R_norm < epsilon. If epsilon is too large, the process can be terminated prematurely, before a good solution has been reached, while if epsilon is too small, the process may need many more iterations [4].
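A sketch of the normalized-radius test of equation (4.13); Euclidean distances and the tolerance value are assumptions:

```python
import math

def should_stop(positions, g_best, diameter, eps=1e-4):
    # R_max: largest distance from any particle to the global best position.
    r_max = max(math.dist(x, g_best) for x in positions)
    # Equation (4.13): stop when the normalized swarm radius falls below eps.
    return r_max / diameter < eps
```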
CHAPTER 5
Recent Works and Advanced Topics of PSO
This chapter describes different types of PSO methods which help to solve different types of optimization problems: the Multi-start (or restart) PSO, which decides when and how to reinitialize particles; the Binary PSO (BPSO), for solving discrete-valued problems; the Multi-phase PSO (MPPSO), which partitions the main swarm of particles into sub-swarms or subgroups; and the Multi-objective PSO (MOPSO), for solving problems with multiple objectives.
5.1 Multi-start PSO (MSPSO)

A probabilistic technique has been discussed to decide when to reinitialize particles. X. Xiao, W. Zhang, and Z. Yang reinitialize the velocities and positions of particles based on chaos factors which act as probabilities of introducing chaos into the system. Let c_v and c_l denote the chaos factors for velocity and location respectively. If a uniform random number r, drawn for each particle and each dimension, satisfies r < c_v, the particle velocity component is reinitialized; similarly, if r < c_l, the particle position component is reinitialized. In this technique, one starts with large chaos factors that decrease over time to ensure that an equilibrium state can be reached. The initially large chaos factors increase diversity in the first stages of the search, and the decreasing factors allow the particles to converge in the final steps [4].

A convergence criterion is another technique to decide when to reinitialize particles, in which particles are allowed to first exploit their local regions before being reinitialized [4]. Particles initiate reinitialization when they do not improve over time. In this technique, the variation in particle fitness of the current swarm is evaluated; if the variation is small, the particles are close to the global best position. Otherwise, particles that are at least two standard deviations away from the swarm center are reinitialized.

M. Løvbjerg and T. Krink have developed reinitialization of particles using self-organized criticality (SOC), which can help control the PSO and add diversity [21]. In SOC, each particle maintains an additional variable C_i, the criticality of particle i. If two particles are closer than a threshold distance from one another, both particles have their criticality increased by one. The particles have no neighborhood restrictions; the neighborhood is a fully connected network (i.e. star type), so that each particle can affect all other particles [21]. In the SOC PSO model, the velocity of each particle is updated by

  v_i(t+1) = chi [ w v_i(t) + phi1 (P_best,i(t) - x_i(t)) + phi2 (G_best(t) - x_i(t)) ],   (5.1)

where chi is known as the constriction factor, w is the inertia weight, and phi1, phi2 are random values different for each particle and for each dimension [21].
In each iteration, each criticality C_i is decreased by a fraction to prevent criticality from building up [21]. When C_i > C, where C is the global criticality limit, the criticality of particle i is distributed to its immediate neighbors and C_i is reinitialized. The authors also tie the inertia weight value of each particle to its criticality, which forces a particle to explore more when it is too similar to other particles [4].
Flowchart 3: Self-Organized Criticality PSO. The gbest loop of Flowchart 1 is extended as follows: after the velocity update v_i(t+1) = chi[v_i(t) + phi1(P_best,i(t) - x_i(t)) + phi2(G_best(t) - x_i(t))] and the position update, the criticality of all particles is calculated and then reduced for each particle; any particle with C_i > C disperses its criticality to its neighbors and is reinitialized.
5.2 Multi-phase PSO (MPPSO)

In this method, the personal best position is eliminated from the main velocity equation (4.3), since a particle's position is only updated when the new position improves the performance in the solution space [4] [23]. Another MPPSO algorithm is based on the group PSO and multi-start PSO algorithms, and was introduced by H. Qi et al. [23]. The advantage of the MPPSO algorithm is that when the fitness of a particle no longer changes, the particle's flying speed and direction in the search space are changed by the adaptive velocity strategy. Therefore, MPPSO differs from the basic PSO in three ways: 1. the particles are divided into multiple groups to increase the diversity of the swarm and the extensiveness of the explored space; 2. different phases are introduced in the algorithm, with different searching ways and flying directions; 3. the searching direction is adapted to increase the particles' fitness [23].
5.3 Perturbed PSO (PPSO)

The perturbed particle swarm algorithm is based upon a new particle updating strategy and the concept of the perturbed global best (p-gbest) within the swarm. The p-gbest updating strategy is based on the concept of a possibility measure to model the lack of information about the true optimality of the gbest [24]. In PPSO, the particle velocity is rewritten as

  v_ij(t+1) = w v_ij(t) + c1 r1j(t)[P_best,ij(t) - x_ij(t)] + c2 r2j(t)[G'_best,j(t) - x_ij(t)],   (5.3)

where

  G'_best,j(t) ~ N(G_best,j(t), sigma(t)^2)   (5.4)

is the j-th dimension of the p-gbest in iteration t. Here N is the normal distribution, and sigma represents the degree of uncertainty about the optimality of the gbest; it is modeled as some nonincreasing function of the number of iterations, defined as

  sigma(t) = sigma_max if t < alpha T, and sigma(t) = sigma_min otherwise,   (5.5)

where the bounds sigma_max and sigma_min and the fraction alpha of the maximum iteration number T are manually set parameters.
Flowchart 4: Perturbed PSO. The gbest loop of Flowchart 1 is modified so that, in each iteration, sigma is calculated using equation (5.5) and the perturbed global best G'_best is drawn using equation (5.4); the velocity update v_ij(t+1) = v_ij(t) + c1 r1j(t)[P_best,ij(t) - x_ij(t)] + c2 r2j(t)[G'_best,j(t) - x_ij(t)] and the position update then use G'_best in place of G_best, after which the personal and global bests are updated as usual.
The p-gbest function encourages the particles to explore a solution space beyond that defined by the search trajectory. When sigma is large, the p-gbest yields a simple and efficient exploration at the initial stage, while a small sigma encourages local fine-tuning in the latter stage. Moreover, the p-gbest reduces the likelihood of premature convergence and helps to direct the search toward the most promising search area [24].
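A sketch of the p-gbest idea of equations (5.4) and (5.5): each dimension of the global best is perturbed with normally distributed noise whose spread shrinks from sigma_max to sigma_min after a fraction alpha of the run. The parameter values are illustrative assumptions:

```python
import random

def perturbed_gbest(g_best, t, max_iter, sigma_max=1.0, sigma_min=0.01, alpha=0.5):
    # Equation (5.5): degree of uncertainty, nonincreasing in the iteration count.
    sigma = sigma_max if t < alpha * max_iter else sigma_min
    # Equation (5.4): p-gbest_j ~ N(gbest_j, sigma^2) for each dimension j.
    return [random.gauss(g, sigma) for g in g_best]

print(perturbed_gbest([1.0, 2.0], t=10, max_iter=100))
```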
5.4 Multi-Objective PSO (MOPSO)

Let S be an n-dimensional search space, and let f_i(x), i = 1, ..., k, be k objective functions defined over S. Then a general multi-objective minimization problem can be expressed as:

  minimize F(x) = (f_1(x), ..., f_k(x))   (5.6)

subject to:

  g_i(x) <= 0, i = 1, ..., m,   (5.7)
  h_i(x) = 0, i = 1, ..., p,   (5.8)

where x = (x_1, ..., x_n) is the decision-making vector on the search space, F(x) is the goal vector, and g_i and h_i are the constraint functions (or bound conditions) of the problem. The objective functions can be conflicting with each other, so that the detection of a single global minimum of all objectives cannot possibly be at the same point in S. To solve this problem, optimality of a solution in multi-objective problems needs to be redefined properly.

Let u = (u_1, ..., u_k) and v = (v_1, ..., v_k) be two goal vectors. Then u dominates v (denoted u < v) if and only if u_i <= v_i for all i = 1, ..., k, and u_i < v_i for at least one component. This property is called Pareto dominance. Now a solution x of the multi-objective problem is said to be Pareto optimal if and only if there is no other solution y in S such that F(y) dominates F(x), that is, x is not dominated. The set of non-dominated (or all Pareto optimal) solutions of a problem in the solution space is called the Pareto optimal set, denoted P*, and the set

  PF* = { F(x) : x in P* }   (5.9)

is called the Pareto front [26].
In multi-objective optimization algorithms, these cases are considered the most difficult. From the definition of Pareto optimality, it is true that the main goal in multi-objective optimization problems is the detection of all Pareto optimal solutions. Since the Pareto optimal set may be infinite and all the computational problems are time and space limited, we are compelled to set more realistic goals [26]. A number of approaches have been proposed to extend the PSO for multiple objective problems. Some approaches will be discussed in this section.
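A minimal Python check of the Pareto dominance relation defined above; minimization of every objective is assumed, and the function names are illustrative:

```python
def dominates(u, v):
    # u dominates v iff u is no worse in every objective and strictly better in one.
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(points):
    # Keep the non-dominated objective vectors (the image of the Pareto optimal set).
    return [p for p in points if not any(dominates(q, p) for q in points)]

print(pareto_front([(1, 4), (2, 2), (3, 3), (4, 1)]))  # [(1, 4), (2, 2), (4, 1)]
```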
Here the velocity of the i-th particle in the s-th swarm is updated using, as the social attractor, the best position found by any particle in another swarm, evaluated with that swarm's objective function. The VEPSO algorithm is called parallel VEPSO because it also enables the swarms to be executed on parallel computers connected in an Ethernet network [16]. In 2005, Raquel and Naval first introduced the Multi-Objective PSO with Crowding Distance (MOPSO-CD). This algorithm is based on a crowding-distance mechanism for the selection of the global best particle and for the deletion of non-dominated solutions from the external archive. The MOPSO-CD method also has a constraint-handling technique for solving constrained optimization problems.
5.5 Binary PSO (BPSO)

In the binary PSO, each element of a particle's position takes a value of 0 or 1. The update equation for the velocity does not change from that used in the original PSO, equation (3.7). Now, the j-th bit of the i-th particle, x_ij(t+1), is updated by

  x_ij(t+1) = 1 if u_ij(t) < s(v_ij(t+1)), and x_ij(t+1) = 0 otherwise,   (5.11)

where u_ij(t) is a random number selected from a uniform distribution in (0, 1), and s is the sigmoid function

  s(v) = 1 / (1 + e^(-v)).   (5.12)
Flowchart 5: Binary PSO. The gbest loop of Flowchart 1 is modified so that, after the velocity update, the sigmoid value s(v_ij(t+1)) is computed and the bit is set by x_ij(t+1) = 1 if u_ij(t) < s(v_ij(t+1)) and x_ij(t+1) = 0 otherwise; the personal and global bests are then updated as usual.
Now if the bit is not flipped to 1 at the current iteration, the velocity increases again at the next iteration, along with a greater probability of flipping the bit. In continuous-valued PSO, the maximum velocity can be a large number to support exploration; in binary PSO, however, the maximum velocity is kept small for exploration, even if a good solution is found [12]. It is suggested that the maximum velocity v_max = 6, which corresponds to a maximum probability of 0.997 that a bit is flipped to 1, whereas the minimum velocity v_min = -6 corresponds to a probability of only 0.002. Since each bit of x is always binary-valued in the solution space, no boundary conditions need to be specified in BPSO [10]. The velocity v_ij is in effect a probability for the particle position x_ij to be 0 or 1: for instance, if v_ij = 0, then s(v_ij) = 0.5 (or 50%); if v_ij < 0, then s(v_ij) < 0.5; and if v_ij > 0, then s(v_ij) > 0.5. Due to the random number u_ij(t), the bit x_ij can change even if the value of v_ij does not change.
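A sketch of the binary position update of equations (5.11)-(5.12) in Python; the helper name is an assumption:

```python
import math, random

def bpso_position_update(v):
    # Equation (5.12): s(v) = 1 / (1 + e^(-v)) turns each velocity into a probability.
    s = [1.0 / (1.0 + math.exp(-vj)) for vj in v]
    # Equation (5.11): bit j becomes 1 with probability s(v_j), else 0.
    return [1 if random.random() < sj else 0 for sj in s]

print(bpso_position_update([0.0, 4.0, -4.0]))  # bits ~50%, ~98%, ~2% likely to be 1
```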
The binary PSO algorithm is very important for practical and commercial use in solving discrete problems, and it certainly needs much more attention in the future. Finally, in the following section the lot sizing problem is used to illustrate the details of the computations for iterations 1 and 2 using the BPSO method.
The objective function, equation (1), is to minimize the total cost, with constraints that include the following limits: no initial inventory is available, equation (2); equation (3) represents the inventory balance equation, in which the order quantity covers all the requirements until the next order; equation (4) shows that the projected inventory is always positive; equation (5) satisfies the condition that no shortages are allowed; and finally equation (6) denotes the decision variable, which is either 1 (place an order) or 0 (do not place an order) [22].
Solution:
The various steps of the procedure are illustrated using the binary particle swarm optimization.

Step 1: Consider the number of particles, the number of dimensions (periods), the ordering cost per period, and the holding cost per unit per period; also the net requirements, the lot sizes of the particles, the initial particle positions and corresponding velocities, and the inventory balances of the particles, as given in the problem data. Finally, evaluate each particle in the swarm using the objective function in each period at iteration t = 0; the resulting total costs of the particles are 660, 460 and 490.
So the velocities v_ij(1), the corresponding sigmoid values s_ij(1), and the updated bits x_ij(1) are computed for each particle i in every period j.
The following table gives the particles' updated positions and velocities after completing the first iteration.
Step 6: Evaluate the objective function value of each particle x_i(1) in the swarm.
Step 7: Update the personal best for each particle in the swarm: whenever the new objective function value of a particle improves on its previous personal best value, the personal best is replaced by the new position.
Step 8: Update the global best: since the best (lowest) total cost found in the swarm is 460, the corresponding particle becomes the new global best G_best.
Step 9: Stopping criterion: If the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.
CHAPTER 6
Applications of PSO
This chapter discusses the various application areas of the PSO method. Kennedy and Eberhart established the first practical application of Particle Swarm Optimization in 1995; it was in the field of neural network training and was reported together with the algorithm itself. PSO has been successfully used across a wide range of applications, for instance, telecommunications, system control, data mining, power systems, design, combinatorial optimization, signal processing, network training, and many other areas. Nowadays, PSO algorithms have also been developed to solve constrained problems, multi-objective optimization problems, and problems with dynamically changing landscapes, and to find multiple solutions, while the original PSO algorithm was used mainly to solve unconstrained, single-objective optimization problems [29]. Various areas where PSO is applied are listed in Table 1 [5]:

Table 1. Application areas of Particle Swarm Optimization

Antennas Design: The optimal control and design of phased arrays, broadband antenna design and modeling, reflector antennas, design of Yagi-Uda arrays, array failure correction, optimization of a reflectarray antenna, far-field radiation pattern reconstruction, antenna modeling, design of planar antennas, conformal antenna array design, design of patch antennas, design of periodic antenna arrays, near-field antenna measurements, optimization of profiled corrugated horn antennas, synthesis of antenna arrays, adaptive array antennas, design of implantable antennas.

Signal Processing: Pattern recognition of flatness signal, design of IIR filters, 2D IIR filters, speech coding, analogue filter tuning, particle filter optimization, nonlinear adaptive filters, Costas arrays, wavelets, blind detection, blind source separation, localization of acoustic sources, distributed odour source localization, and so on.

Networking: Radar networks, bluetooth networks, auto tuning for universal mobile telecommunication system networks, optimal equipment placement in mobile communication, TCP network control, routing, wavelength division-multiplexed networks, peer-to-peer networks, bandwidth and channel allocation, WDM telecommunication networks, wireless networks, grouped and delayed broadcasting, bandwidth reservation, transmission network planning, voltage regulation, network reconfiguration and expansion, economic dispatch problem, distributed generation, microgrids, congestion management, cellular neural networks, design of radial basis function networks, feed forward neural network training, product unit networks, neural gas networks, design of recurrent neural networks, wavelet neural networks, neuron controllers, wireless sensor network design, estimation of target position in wireless sensor networks, wireless video sensor networks optimization.

Biomedical: Human tremor analysis for the diagnosis of Parkinson's disease, inference of gene regulatory networks, human movement biomechanics optimization, RNA secondary structure determination, phylogenetic tree reconstruction, cancer classification and survival prediction, DNA motif detection, biomarker selection, protein structure prediction and docking, drug design, radiotherapy planning, analysis of brain magnetoencephalography data, electroencephalogram analysis, biometrics, and so on.

Electronics and Electromagnetics: On-chip inductors, configuration of FPGAs and parallel processor arrays, fuel cells, circuit synthesis, FPGA-based temperature control, AC transmission system control, electromagnetic shape design, microwave filters, generic electromagnetic design and optimization applications, CMOS RF wideband amplifier design, linear array antenna synthesis, conductors, RF IC design and optimization, semiconductor optimization, high-speed CMOS, frequency selective surface and absorber design, voltage flicker measurement, shielding, digital circuit design.

Robotics: Control of robotic manipulators and arms, motion planning and control, odour source localization, soccer playing, robot running, robot vision, collective robotic search, transport robots, unsupervised robotic learning, path planning, obstacle avoidance, swarm robotics, unmanned vehicle navigation, environment mapping, voice control of robots, and so forth.

Design and Modelling: Conceptual design, electromagnetics case, induction heating cooker design, VLSI design, power systems, RF circuit synthesis, worst case electronic design, motor design, filter design, antenna design, CMOS wideband amplifier design, logic circuits design, transmission lines, mechanical design, library search, inversion of underwater acoustic models, modeling MIDI music, customer satisfaction models, thermal process system identification, friction models, model selection, ultrawideband channel modeling, identifying ARMAX models, power plants and systems, chaotic time series modeling, model order reduction.

Image and Video Analysis: Image segmentation, autocropping for digital photographs, synthetic aperture radar imaging, locating treatment planning landmarks in orthodontic x-ray images, image classification, inversion of ocean color reflectance measurements, image fusion, photo time-stamp recognition, traffic stop-sign detection, defect detection, image registration, microwave imaging, pixel classification, detection of objects, pedestrian detection and tracking, texture synthesis, scene matching, contrast enhancement, 3D recovery with structured beam matrix, character recognition, image noise cancellation.

Power Systems: Automatic generation control, power transformer protection, power loss minimization, load forecasting, STATCOM power system, fault-tolerant control of compensators, hybrid power generation systems, optimal power dispatch, power system performance optimization, secondary voltage control, power control and optimization, design of power system stabilizers, operational planning for cogeneration systems, control of photovoltaic systems, large-scale power plant control, analysis of power quality signals, generation planning and restructuring, optimal strategies for electricity production, production costing, operations planning.

Fuzzy Systems, Clustering and Data Mining: Design of neurofuzzy networks, fuzzy rule extraction, fuzzy control, membership functions optimization, fuzzy modeling, fuzzy classification, design of hierarchical fuzzy systems, fuzzy queue management, clustering, clustering in large spatial databases, document and information clustering, dynamic clustering, cascading classifiers, classification of hierarchical biological data, dimensionality reduction, genetic-programming-based classification, fuzzy clustering, classification threshold optimization, electrical wader sort classification, data mining, feature selection.

Optimization: Electrical motors optimization, optimization of internal combustion engines, optimization of nuclear electric propulsion systems, floor planning, travelling-salesman problems, n-queens problem, packing and knapsack, minimum spanning trees, satisfiability, knight's cover problem, layout optimization, path optimization, urban planning, FPGA placement and routing.

Prediction and Forecasting: Water quality prediction and classification, prediction of chaotic systems, streamflow forecast, ecological models, meteorological predictions, prediction of the flow stress in steel, time series prediction, electric load forecasting, battery pack state of charge estimation, predictions of elephant migrations, prediction of surface roughness in end milling, urban traffic flow forecasting, and so on.
CHAPTER 7
Conclusion
This thesis discussed the basic Particle Swarm Optimization algorithm, the geometrical and mathematical explanation of PSO, the particles' movement and velocity update in the search space, the acceleration coefficients, and the particles' neighborhood topologies in Chapter 3. In Chapter 4, a set of convergence techniques, i.e. the velocity clamping, inertia weight and constriction coefficient techniques, which can be used to improve the speed of convergence and to control the exploration and exploitation abilities of the entire swarm, was illustrated. The Guaranteed Convergence PSO (GCPSO) algorithm was analyzed; this algorithm is very important for solving a problem when all particles face premature convergence or stagnation in the search process. Boundary conditions, which are very useful in the PSO algorithm, were also presented. Chapter 5 presented five different types of PSO algorithms which solve different types of optimization problems. The Multi-Start PSO (MSPSO) algorithm attempts to detect when the swarm lacks diversity; once a lack of diversity is found, the algorithm re-starts the search with new randomly chosen initial positions for the particles. The Multi-phase PSO (MPPSO) algorithm partitions the main swarm into sub-swarms or subgroups, where each sub-swarm performs a different task or exhibits a different behavior; the swarms then cooperate to solve the problem by sharing the best solutions they have discovered in their respective sub-swarms. During the optimization process, a high speed of convergence sometimes generates a quick loss of diversity, which leads to undesirable premature convergence; to solve this problem, the perturbed particle swarm algorithm (PPSO) was illustrated in this chapter. The Multi-Objective PSO (MOPSO) algorithm is very important when an optimization problem has several objective functions, and one discrete optimization problem was solved by the Binary PSO (BPSO) algorithm.

The PSO algorithm still has some problems that ought to be resolved. Therefore, future work on the PSO algorithm will probably concentrate on the following:
1. Finding a particular PSO algorithm which can be expected to provide good performance.
2. Combining the PSO algorithm with other optimization methods to improve the accuracy.
3. Using the algorithm to solve non-convex optimization problems.
ABBREVIATIONS
ABC: Absorbing boundary condition.
BPSO: Binary Particle Swarm Optimization.
CSS-MOPSO: Cross-Searching Strategy Multi-Objective Particle Swarm Optimization.
DBC: Damping boundary condition.
DNPSO: Dynamic Neighborhood Particle Swarm Optimization.
gbest PSO: Global Best Particle Swarm Optimization.
GCPSO: Guaranteed Convergence Particle Swarm Optimization.
IBC: Invisible boundary condition.
I/DBC: Invisible/Damping boundary condition.
I/RBC: Invisible/Reflecting boundary condition.
lbest PSO: Local Best Particle Swarm Optimization.
MOPSO: Multi-Objective Particle Swarm Optimization.
MPPSO: Multi-phase Particle Swarm Optimization.
MSPSO: Multi-Start Particle Swarm Optimization.
PF: Pareto front.
PPSO: Perturbed Particle Swarm Optimization.
PSO: Particle Swarm Optimization.
RBC: Reflecting boundary condition.
VEPSO: Vector Evaluated Particle Swarm Optimization.
LIST OF SYMBOLS
$f$: The function being minimized or maximized; it takes a vector input and returns a scalar value.
$v_{ij}(t)$: The velocity of particle $i$ in dimension $j$ at time $t$.
$x_{ij}(t)$: The position of particle $i$ in dimension $j$ at time $t$.
$y_{ij}(t)$: The personal best position of particle $i$ in dimension $j$ found from initialization through time $t$.
$\hat{y}_j(t)$: The global best position in dimension $j$ found from initialization through time $t$.
$c_1, c_2$: Positive acceleration constants used to scale the contribution of the cognitive and social components, respectively.
$r_{1j}(t), r_{2j}(t)$: Random numbers drawn from the uniform distribution $U(0, 1)$ at time $t$.
$t_1, t_2$: Two chosen iterations.
$n_s$: The swarm size, or number of particles.
$t$: Denotes time or time steps.
$D$: The maximum number of dimensions.
$P$: The maximum number of particles.
$N$: The total number of iterations.
$V_{max}$: The maximum velocity.
$w$: The inertia weight.
$\chi$: The constriction coefficient.
$\rho(t)$: The parameter that controls the diameter of the search space (GCPSO).
$C_i$: The criticality of particle $i$.
$C_{max}$: The global criticality limit.
$\text{p-gbest}_j^{(t)}$: The $j$-th dimension of p-gbest in iteration $t$ (PPSO).
$N(\mu, \sigma)$: The normal distribution.
$\sigma$: The degree of uncertainty about the optimality of the gbest, modeled as some non-increasing function of the number of iterations.
$v_i^{[k]}$: The velocity of the $i$-th particle in the $k$-th swarm (VEPSO).
$\hat{y}^{[j]}$: The best position found by any particle in the $j$-th swarm, evaluated with the $j$-th objective function (VEPSO).
$s(\cdot)$: The sigmoid function.
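For convenience, the symbols above can be tied together by restating the standard velocity and position update rules of the gbest PSO from Chapter 3; the formulation below is a sketch in this notation and assumes the inertia-weight form of the algorithm:
\[
v_{ij}(t+1) = w\, v_{ij}(t) + c_1 r_{1j}(t)\big[y_{ij}(t) - x_{ij}(t)\big] + c_2 r_{2j}(t)\big[\hat{y}_j(t) - x_{ij}(t)\big],
\]
\[
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1).
\]
In the Binary PSO, the sigmoid $s(v_{ij}) = 1/(1 + e^{-v_{ij}})$ maps the velocity to the probability that the bit $x_{ij}$ takes the value 1.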