
An Improved Particle Swarm Optimization Algorithm for MINLP Problems

Wang Zhi, Ma Jian-Jun, Liu Zhao

Department of Computer Science and Technology, Wuhan University of Science and Technology, 430081, Hubei, China
Email: [email protected], [email protected], [email protected]

Abstract

To solve MINLP problems, this paper presents an improved PSO algorithm. The main characteristics of the improved algorithm include the introduction of backup particles and the proposal of a particle substitution strategy, which improve the learning ability and velocity updating of the particles. Numerical experiments on classical test problems show that the improved algorithm possesses both high accuracy and quick convergence.

Keywords: MINLP (Mixed-Integer Nonlinear Programs), PSO (Particle Swarm Optimization), Evolutionary Computation (EC)

1 Introduction

The MINLP model refers to a class of complicated nonlinear programming problems that contain both integer variables and continuous variables. The general form of MINLP is:

Minimize f(X,Y), subject to:
    g_i(X,Y) ≤ 0, i = 1, 2, ..., j;
    h_i(X,Y) = 0, i = j+1, j+2, ..., k;
    X_lower ≤ X ≤ X_upper, Y_lower ≤ Y ≤ Y_upper,

where X ∈ R^p, Y ∈ N^q, p + q = n; R^p is the p-dimensional real space and N^q is the q-dimensional integer space. f(X,Y) is the nonlinear objective function, and g_i(X,Y), h_i(X,Y) are the nonlinear constraint functions. MINLP is an NP-complete problem that has long been regarded as very hard, but its solution has become feasible with the development of computer technology. In the literature there are generally three methods for solving MINLP: branch-and-bound (B&B), Generalized Benders Decomposition (GBD) and Outer Approximation (OA). To address the limitations of these three algorithms, this paper provides an improvement of PSO with the addition of a substitution strategy and an enhancement of the particles' learning ability, making the improved algorithm handle MINLP more efficiently and with better results.

2 PSO (Particle Swarm Optimization)

PSO, proposed by Eberhart and Kennedy in 1995, is a global optimization evolutionary algorithm inspired by the foraging behavior of bird flocks. Briefly: a swarm of particles is initialized at random in a search space, where each particle's position represents a candidate solution and each particle flies at a certain velocity. Through repeated flights, i.e. iterations, the swarm gradually approaches the optimal position and thereby finds the optimal solution. In each iteration, particles update themselves by two extremes: one is the best solution found by the particle itself, called pBest; the other is the current best solution found by the whole swarm, called gBest. Particles update their velocities and positions on the basis of these two extremes:

    V = ω*V + C1*rand()*(pBest - X) + C2*rand()*(gBest - X)    (1)
    X = X + V                                                  (2)

V is the velocity of a particle, X is the position of the current particle, pBest and gBest are defined above, rand() is a uniform random value in (0, 1), ω is the inertia weight, and C1 and C2 are learning factors, usually C1 = C2 = 2.
Chart 1 is the flowchart of PSO.
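The update rules (1) and (2) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the swarm size, iteration count, inertia weight ω = 0.6, bounds and the sphere test function are our own choices; C1 = C2 = 2 follows the text.

```python
import random

def pso(f, dim=2, swarm=20, iters=200, w=0.6, c1=2.0, c2=2.0,
        lo=-10.0, hi=10.0, seed=0):
    """Minimize f with the basic PSO updates (1) and (2)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]                      # best position of each particle
    pval = [f(x) for x in pbest]
    g = min(range(swarm), key=lambda i: pval[i])   # index of the swarm best
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # equation (1): inertia + cognitive + social terms
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                # equation (2): move the particle, clamped to the box bounds
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pval[i]:                       # update pBest
                pval[i], pbest[i] = fx, X[i][:]
                if fx < gval:                      # update gBest
                    gval, gbest = fx, X[i][:]
    return gbest, gval

if __name__ == "__main__":
    best, val = pso(lambda x: sum(t * t for t in x))
    print(val)  # typically very close to 0 for the sphere function
```

Note that this plain form is exactly what the improved algorithm of Section 3 modifies: it has no mechanism for escaping local optima.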

Chart 1. PSO Algorithm

3 The Improved PSO Algorithm

There are several drawbacks of PSO when dealing with MINLP: one is that it easily falls into local optimal solutions; the other is its inefficiency. This paper therefore proposes a particle substitution strategy and improves the particle velocity updating strategy, so as to enhance the particles' learning ability and the flexibility of the search in the global solution space. Chart 2 is the flowchart of the improved PSO algorithm.

Chart 2. The Improved PSO Algorithm:

    Initialize backup-particles;
    Initialize on-duty-particles;
    While (not terminated) do {
        For each on-duty-particle {
            Calculate fitness value;
            If the fitness value is better than the best fitness value (p-best) in history
                Set current value as the new p-best;
            Else if (without hope)
                Substitute the particle with a backup-particle;
        }
        For each backup-particle {          // the maintenance and updating of backup-particles
            if (valid(present[] + v[]))
                present[] = present[] + v[];
            else
                while (not valid(present[] + v[]))
                    do { v[] = rand() % L[]; }  // generate a short random motion vector
            if (used times > N)             // N is an adjustable constant
                while (not valid(present[]))
                    do { present[] = rand(); }  // re-initialize the particle at random
        }
        Choose the particle with the best fitness value of all the particles as g-best;
        For each particle {
            Calculate particle velocity according to equation (1);
            Update particle position according to equation (2);
        }
    }

3.1 Particles Substitution Strategy

Particles that fall into a local optimal solution can hardly jump out of it by the general moving strategies, so they have to be substituted by new particles; that is, the particles trapped in the local optimum are replaced with particles from the legitimate solution space. However, generating new particles and legitimate solutions is often costly, especially when the constraint conditions are harsh and the constraint inequalities involve many variables. In view of this, the paper suggests establishing a dynamic backup-particle reserve in which the particles move randomly in the legitimate solution space. When needed, particles chosen from this reserve replace the particles trapped in local optima and continue the search for the global optimum. In this way, on the one hand, the cost of generating basic legitimate solutions is reduced; on the other hand, the backup particles and their tracks are well distributed over the legitimate solution space, so the global search ability of the algorithm is ensured.

3.2 The substitution of particles is shown in Chart 3

Chart 3. The Substitution Strategy
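The backup-particle maintenance described above can be sketched as follows. This is our own minimal rendering of the pseudocode, not the paper's code: `valid` stands for the feasibility ("legitimate solution") test, and the step bound, the retry constant `max_used` (the adjustable N) and the unit-disc region in the usage example are illustrative assumptions.

```python
import random

rng = random.Random(1)

def maintain_backup(particle, valid, lo, hi, step=0.1, max_used=50):
    """One maintenance step for a backup particle that wanders randomly
    inside the feasible region, as in the Chart 2 pseudocode."""
    pos, vel = particle["pos"], particle["vel"]
    trial = [p + v for p, v in zip(pos, vel)]
    if valid(trial):
        particle["pos"] = trial                     # present[] = present[] + v[]
    else:
        # re-draw short random motion vectors until a feasible move is found
        while not valid([p + v for p, v in zip(pos, vel)]):
            vel = [rng.uniform(-step, step) for _ in pos]
        particle["vel"] = vel
        particle["pos"] = [p + v for p, v in zip(pos, vel)]
    particle["used"] += 1
    if particle["used"] > max_used:                 # N: adjustable reseeding constant
        while True:                                 # re-initialize anywhere feasible
            fresh = [rng.uniform(lo, hi) for _ in pos]
            if valid(fresh):
                particle["pos"], particle["used"] = fresh, 0
                break
    return particle

# usage: a backup particle wandering inside the unit disc (illustrative region)
inside = lambda x: x[0] ** 2 + x[1] ** 2 <= 1.0
p = {"pos": [0.0, 0.0], "vel": [0.05, 0.05], "used": 0}
for _ in range(100):
    maintain_backup(p, inside, -1.0, 1.0)
print(inside(p["pos"]))  # True: the particle never leaves the feasible region
```

The invariant the sketch preserves is the one the strategy relies on: a backup particle's position is feasible at all times, so it can replace a trapped on-duty particle without any extra repair cost.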

3.3 The Particles Velocity Updating Strategy

Different from the traditional velocity updating strategy of PSO, the improved algorithm divides velocity into two aspects, direction and step, and establishes a separate alternation strategy and testing method for each. Within these strategies, the direction and the step, as well as whether a new testing method needs to be adopted to generate a new value, are determined mainly on the basis of the particles' experience, their numbers of successes and failures, and the results obtained so far.

3.4 The pseudocode of the improved PSO algorithm is given in Chart 2

4 The experimental results and comparative analysis

We select three classical test problems for numerical experiments, in order to test the efficiency, speed and accuracy of the new PSO. The experimental environment is a PII-366 CPU, 256 MB of memory and the Windows XP operating system.

Question 1.
Minimize f(X,Y) = 0.6224*(0.0625*y1)*x1*x2 + 1.7781*(0.0625*y2)*(x1)^2 + 3.1661*(0.0625*y1)^2*x2 + 19.84*(0.0625*y1)^2*x1;
Constraint conditions:
    g1(X,Y) = 0.0193*x1 - 0.0625*y1 ≤ 0;
    g2(X,Y) = 0.00954*x1 - 0.0625*y2 ≤ 0;
    g3(X,Y) = 750*1728 - π*(x1)^2*x2 - 4/3*π*(x1)^3 ≤ 0;
    g4(X,Y) = x2 - 240 ≤ 0.
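Question 1 can be encoded directly as an objective plus constraint checks, which is all the improved PSO needs as input. The sketch below is ours, not the paper's solver; the function names and the sample candidate are illustrative assumptions.

```python
import math

def f1(x1, x2, y1, y2):
    """Objective of Question 1 (x1, x2 continuous; y1, y2 integer
    multipliers of the 0.0625 discretization)."""
    t1, t2 = 0.0625 * y1, 0.0625 * y2
    return (0.6224 * t1 * x1 * x2
            + 1.7781 * t2 * x1 ** 2
            + 3.1661 * t1 ** 2 * x2
            + 19.84 * t1 ** 2 * x1)

def feasible1(x1, x2, y1, y2):
    """All four constraints g1..g4 of Question 1 as g_i <= 0 checks."""
    t1, t2 = 0.0625 * y1, 0.0625 * y2
    g = [
        0.0193 * x1 - t1,                                    # g1
        0.00954 * x1 - t2,                                   # g2
        750 * 1728 - math.pi * x1 ** 2 * x2
        - 4.0 / 3.0 * math.pi * x1 ** 3,                     # g3
        x2 - 240,                                            # g4
    ]
    return all(gi <= 0 for gi in g)

# usage with an illustrative feasible candidate (not a solution from the paper)
print(feasible1(45.0, 180.0, 14, 7))   # True
print(f1(45.0, 180.0, 14, 7) > 0)      # True
```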
This test problem was proposed in Reference [2] and has been dealt with in References [3, 4, 5, 6].

Question 2.
Min f(x1,x2,x3,y1,y2,y3,y4) = (y1-1)^2 + (y2-1)^2 + (y3-1)^2 - ln(y4+1) + (x1-1)^2 + (x2-2)^2 + (x3-3)^2.
Constraint conditions:
    y1 + y2 + y3 + x1 + x2 + x3 ≤ 5;
    (y3)^2 + (x1)^2 + (x2)^2 + (x3)^2 ≤ 5.5;
    y1 + x1 ≤ 1.2;
    y2 + x2 ≤ 1.8;
    y3 + x3 ≤ 2.5;
    y4 + x1 ≤ 1.2;
    (y2)^2 + (x2)^2 ≤ 1.64;
    (y3)^2 + (x3)^2 ≤ 4.25;
    (y2)^2 + (x3)^2 ≤ 4.64;
    x1, x2, x3 ≥ 0;
    y1, y2, y3, y4 ∈ {0, 1}.
This problem has been dealt with in References [7, 8, 9, 10, 11].

Table 2. Experiment Result of Question 2

Table 3. Experiment Result of Question 3
Question 3.
Max f(x_i) = | Σ_{i=1..n} cos^4(x_i) - 2 Π_{i=1..n} cos^2(x_i) | / sqrt( Σ_{i=1..n} i*(x_i)^2 ),
Constraint conditions:
    0 < x_i < 10, i = 1, 2, ..., n;
    0.75 ≤ Π_{i=1..n} x_i ≤ 0.75n.
The above problem is called BUMP and was first proposed by Keane [12] for optimal structural design in 1994. Because BUMP possesses three challenging features (strong nonlinearity, many local peaks, and high dimensionality), it has become an internationally used benchmark problem for measuring optimization algorithms.

The paper reports the results of ten experiments with the improved PSO on the above problems. The numbers of on-duty particles and backup particles are both set to 30, and the results are as follows:

Table 1. Experiment Result of Question 1

In Question 1, this algorithm found the currently best-known solution in a shorter time and with higher accuracy. In Question 2, the solution is more accurate than the current solutions of other algorithms. In Question 3, this algorithm found the optimal solution in a shorter time and, meanwhile, made clear that several other points also attain the same optimal value (0.36497974587066). These solutions include:
(1.60086004652328, 0.46849805155024); (1.60086040960895, 0.46849806235336);
(1.60086043325990, 0.46849805543182); (1.60086044189444, 0.46849805290488);
(1.60086046865024, 0.46849804507470); (1.60086046892781, 0.46849804499346); and so on.

5 Conclusion

The paper provides an improved PSO based on the
the application of the improved PSO to MINLP.
Experiments show that the improved PSO algorithm is
both faster in convergence and more accurate in
solution. The use of the improved PSO is convenient
because only the fitness function, the expressions of
constrained conditions and the limits of its variables
are asked to input for different problems. In all, this
algorithm is a very effective one to deal with MINLP
and other optimization problems.
References

[1] I. E. Grossmann, N. V. Sahinidis. Special Issue on Mixed Integer Programming and Its Application to Engineering. Optim. Eng., 3(4), Kluwer Academic Publishers, Netherlands, 2002.
[2] E. Sandgren. Nonlinear Integer and Discrete Programming in Mechanical Design. ASME Journal of Mechanical Design, 1990, 112(2): 223-229.
[3] B. K. Kannan, S. N. Kramer. An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design. Journal of Mechanical Design, Transactions of the ASME, 1994, 116(2): 318-320.
[4] Y. J. Cao, Q. H. Wu. Mechanical Design Optimization by Mixed-variable Evolutionary Programming. Proc. of the 1997 Int'l Conf. on Evolutionary Computation. Indianapolis: IEEE Press, 1997: 443-446.
[5] Y. C. Lin. A Hybrid Method of Evolutionary Algorithms for Mixed-integer Nonlinear Optimization Problems. Proc. of the Congress on Evolutionary Computation. Piscataway, NJ: IEEE Press, 1999: 2159-2166.
[6] C. A. Coello Coello. Self-adaptive Penalties for GA-based Optimization. Proc. of the Congress on Evolutionary Computation. Washington: IEEE Press, 1999: 537-580.
[7] C. A. Floudas, A. Aggarwal, A. R. Ciric. Global Optimum Search for Nonconvex NLP and MINLP Problems. Computers & Chemical Engineering, 1989, 13(10): 1117-1132.
[8] H. S. Ryoo, N. V. Sahinidis. Global Optimization of Nonconvex NLPs and MINLPs with Application in Process Design. Computers & Chemical Engineering, 1995, 19: 551.
[9] L. Costa, P. Oliveira. Evolutionary Algorithms Approach to the Solution of Mixed Integer Nonlinear Programming Problems. Computers & Chemical Engineering, 2001, 25: 257-266.
[10] M. F. Cardoso, R. L. Salcedo, S. Feyo de Azevedo. A Simulated Annealing Approach to the Solution of MINLP Problems. Computers & Chemical Engineering, 1997, 21: 1349-1364.
[11] R. L. Salcedo. Solving Nonconvex Nonlinear Programming Problems with Adaptive Random Search. Industrial & Engineering Chemistry Research, 1992, 31: 262.
[12] A. J. Keane. Experience with Optimizers in Structural Design. Proc. of the Conf. on Adaptive Computing in Engineering Design and Control (PEDC), Plymouth, 1994.
