Notes On PSO

The document summarizes a study on improving the particle swarm optimization (PSO) algorithm and applying it to structural optimization problems. PSO is inspired by swarm behavior in nature and updates particle positions based on individual and group memories. The study tests different parameters in PSO, finding that equal individual and social weights with a dynamic inertia weight lead to fast, global convergence. PSO is then successfully applied to optimize truss structures with different numbers of nodes, subject to stress and displacement constraints, finding solutions comparable to other methods.


NOTES

=== ABSTRACT ===


• Show the background and implementation of PSO.
• Present improvements to the algorithm and study which parameters affect its behaviour.
• Apply it to truss optimization problems and show that it performs comparably to other methods.

=== INTRODUCTION ===


• Classical methods are gradient-based and work best on convex, well-defined design problems.
• Non-gradient, probabilistic algorithms were developed by copying natural phenomena, like GA.
• One family of such methods is modelled on how species look for food:
• ant colony optimization (ants) and PSO (birds).
• PSO was first proposed by Kennedy and Eberhart.
• Based on social interactions seen in nature, in species that move in groups or swarms.
• Main idea is that social sharing of information gives an evolutionary advantage, e.g. if
one bird finds food, then the others can go to the same place.
• ROBUST method (convex and non-convex spaces).
• Efficient.
• Does not need any information on the domain space, nor manipulations to handle constraints.
• Can be parallelized.
• The paper modifies and studies PSO, and applies it to truss problems.

=== PSO ===


• Particle swarm is stochastic in nature.
• It uses a velocity vector, the memory of each particle, and the knowledge of the whole swarm:
basically the previous best position of each particle and of the swarm as a whole.
• Thus the position is updated based on the behaviour of the whole swarm and on promising places.

• Position update (Eq. 1):
x_i^{k+1} = x_i^k + v_i^{k+1} Δt
• Δt is the time increment, taken as one iteration, so "time" here is the iteration space.
• The estimate at k+1 is based on the previous position x and the predicted velocity.

• The velocity follows (Eq. 2):
v_i^{k+1} = w v_i^k + c1 r1 (p_i - x_i^k)/Δt + c2 r2 (p_g^k - x_i^k)/Δt
where:

• 1st term: w is the inertia weight, i.e. how far the particle keeps moving through the space as a
factor of its previous velocity. A large w means wide areas get explored.
• 2nd term: depends on the 'memory' of the individual particle. c1 (cognitive) is how much the particle
trusts its own experience, r1 is a random factor, and the difference is between the best
position the particle has ever had and where it was at step k.
• 3rd term: the same, but relative to the best position of the whole swarm, weighted by c2 (social). Important
as it gives the swarm its collective exploration abilities.
• r1 and r2 are random numbers between 0 and 1.
• Steps are (a Python sketch follows this list):
• Initialize a set of particle positions and velocities, randomly distributed within
the bounds.
• Evaluate the objective function f(x) at the initial positions x.
• Update p_i and p_g, the best positions found so far by each particle and by the swarm.
• Update x and v, and loop until convergence is met.
• The initial positions and velocities are generated within the side limits
x_min and x_max of the design variables.
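A minimal Python sketch of the loop above (my own illustration, not the paper's code; the parameter defaults, the bound clipping and the toy test function are assumptions):

```python
# Minimal PSO sketch following Eq. 1 and Eq. 2 with dt = 1.
import numpy as np

def pso(f, x_min, x_max, n_particles=30, n_iter=200, w=0.8, c1=2.0, c2=2.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(x_min)
    # Step 1: random initial positions and velocities bounded by the limits
    x = rng.uniform(x_min, x_max, size=(n_particles, dim))
    v = rng.uniform(-(x_max - x_min), x_max - x_min, size=(n_particles, dim))
    # Step 2: evaluate the objective at the initial positions
    fx = np.apply_along_axis(f, 1, x)
    p_i, f_i = x.copy(), fx.copy()          # best position / value of each particle
    g = p_i[np.argmin(f_i)].copy()          # best position of the whole swarm
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Eq. 2: inertia + cognitive (own memory) + social (swarm memory) terms
        v = w * v + c1 * r1 * (p_i - x) + c2 * r2 * (g - x)
        # Eq. 1: move each particle (clipping to the side limits is one common
        # choice; the paper handles violations differently, see the improvements)
        x = np.clip(x + v, x_min, x_max)
        fx = np.apply_along_axis(f, 1, x)
        # Step 3: update the local and global bests
        better = fx < f_i
        p_i[better], f_i[better] = x[better], fx[better]
        g = p_i[np.argmin(f_i)].copy()
    return g, f_i.min()

# Usage on a toy quadratic (illustrative only):
best_x, best_f = pso(lambda x: np.sum((x - 1.0) ** 2),
                     x_min=np.array([-5.0, -5.0]), x_max=np.array([5.0, 5.0]))
```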
• Putting Eq. 2 into Eq. 1, we get (Eq. 3):
x_i^{k+1} = x_i^k + [w v_i^k + c1 r1 (p_i - x_i^k)/Δt + c2 r2 (p_g^k - x_i^k)/Δt] Δt
which has the same form as a gradient-based line search (like Newton or secant methods, not bisection):
x_{n+1} = x_n + alpha * direction.
The first two terms can be attributed to the previous point and step, and the third term is a step-size
factor multiplied by a direction. Setting (r1, r2) to (0, 0) and to (1, 1) gives the minimum and
maximum of the factor and of the direction. (Check paper; a sketch of the algebra follows at the end of this section.)
• For convergence, we put Eqs. 1 and 2 together in matrix form.
• At convergence, x^{k+1} = x^k as k → ∞.
• We find that the swarm only converges when the velocity is 0 and the current position, the local best
and the global best are all the same point.
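A sketch of the algebra behind the last few bullets (my own working, assuming Δt = 1 and treating the random factors as fixed numbers for one step, with ĉ1 = c1 r1 and ĉ2 = c2 r2):

```latex
% Substituting Eq. 2 into Eq. 1 (\Delta t = 1) and grouping terms:
\begin{align*}
x_i^{k+1}
  &= x_i^{k} + w\,v_i^{k}
     + \hat{c}_1 (p_i - x_i^{k}) + \hat{c}_2 (p_g^{k} - x_i^{k}) \\
  &= \underbrace{x_i^{k} + w\,v_i^{k}}_{\text{previous point and step}}
     + \underbrace{(\hat{c}_1 + \hat{c}_2)}_{\text{step size } \alpha}\,
       \underbrace{\Bigl(\tfrac{\hat{c}_1 p_i + \hat{c}_2 p_g^{k}}{\hat{c}_1 + \hat{c}_2} - x_i^{k}\Bigr)}_{\text{direction } d}
\end{align*}
% i.e. the line-search form x^{k+1} = x^k + \alpha d.  With r_1 = r_2 = 0 the step
% size is 0; with r_1 = r_2 = 1 it reaches its maximum c_1 + c_2.

% Writing Eqs. 1 and 2 as one linear system, with c = \hat{c}_1 + \hat{c}_2 and
% \hat{p} the weighted average of p_i and p_g^{k}:
\begin{equation*}
\begin{bmatrix} x_i^{k+1} \\ v_i^{k+1} \end{bmatrix}
=
\begin{bmatrix} 1 - c & w \\ -c & w \end{bmatrix}
\begin{bmatrix} x_i^{k} \\ v_i^{k} \end{bmatrix}
+
\begin{bmatrix} c \\ c \end{bmatrix}\hat{p}
\end{equation*}
% Imposing a fixed point (x^{k+1} = x^k, v^{k+1} = v^k) forces v_i = 0 and
% x_i = \hat{p}; with p_i = p_g this is exactly the condition above:
% zero velocity and current position = local best = global best.
```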

=== PSO improvements ===


• Inertia weight
• w can be kept constant or can change dynamically as the number of iterations
increases.
• So we can start with a high w, so that a lot of the space is covered, and then decrease
it so that the search concentrates on small promising areas.
• One formulation can be a linear decrease, e.g. w_k = w_max - (w_max - w_min) * k / k_max.
• Schedules can also be nonlinear, or even w_{k+1} = k_w * w_k where k_w is a decay factor.
• Redirection of points that violate the design space
• Particles that violate the design space have the previous-velocity term removed,
and only the attraction toward the local and global best values is kept. This
ensures the velocity vector points back toward a feasible region, i.e. the 1st term of Eq. 2 is dropped.

• Constraints (Eq. 4)
• Handled with a penalty built from the average of the objective function and the level of violation of each constraint during
each iteration:
f'(x) = f(x)                                if x satisfies all constraints
f'(x) = f(x) + sum_{i=1..m} k_i g_i(x)      otherwise
where f(x) is the objective function, m is the number of
constraints, g_i(x) is a specific constraint value (with violated constraints having values larger than
zero), f_bar is the average of the objective function values in the current swarm, and g_bar_i is the
violation of the i-th constraint averaged over the current population. The penalty parameters k_i are built
from f_bar and the g_bar_i (check paper for the exact expression).
• The expression distributes the penalty parameters such that harder constraints (those violated more on
average) get bigger penalties. So if x is within the constraints, the objective function stays f(x); otherwise f(x)
is increased by the penalty term. Suppose only one constraint is not satisfied, so g_1 > 0: for a
particle whose violation g_1 equals the swarm-average violation g_bar_1, the penalty parameter works out to
k = f_bar / g_bar (Equation 20). So the penalty is large for an individual that contributes most
of the violations. (A code sketch of these modifications follows below.)
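A Python sketch of these modifications (illustrative only: the linear inertia schedule and the exact k_i expression are assumptions consistent with the notes, so check the paper for the precise forms):

```python
import numpy as np

def inertia_weight(k, k_max, w_max=0.95, w_min=0.4):
    # Dynamic inertia weight: start high for wide exploration, then decay
    # linearly toward w_min (one possible schedule).
    return w_max - (w_max - w_min) * k / k_max

def velocity_update(x, v, p_i, g, w, c1, c2, violated, rng):
    # Eq. 2, except particles flagged as violating the design space drop the
    # inertia term, so their new velocity points back toward the best positions.
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    pull = c1 * r1 * (p_i - x) + c2 * r2 * (g - x)
    return np.where(violated[:, None], pull, w * v + pull)

def penalised_objective(f_vals, g_vals):
    # Adaptive penalty from swarm averages. f_vals has shape (n_particles,),
    # g_vals has shape (n_particles, m). The k_i expression below is a plausible
    # form that reduces to k = f_bar / g_bar for a single constraint, as stated
    # in the notes; the paper's exact formula should be checked.
    g_viol = np.maximum(g_vals, 0.0)        # only violated constraints (> 0) count
    f_bar = np.abs(f_vals.mean())           # average objective over the swarm
    g_bar = g_viol.mean(axis=0)             # average violation per constraint
    denom = np.sum(g_bar ** 2)
    k = f_bar * g_bar / denom if denom > 0 else np.zeros_like(g_bar)
    return f_vals + g_viol @ k              # feasible particles keep f(x) unchanged
```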

=== Numerical Studies ===


1. Six-node truss:
• Basic six-node truss as given in the paper, with vertical downward point loads.
Constraints are on member stresses and on displacements at the nodes.
• Social and cognitive parameters
• c1 and c2 are permuted between 0 and 4. When only the social or only the
cognitive term is used, convergence is fast but does not improve after the initial
convergence, i.e. the swarm settles in a local, suboptimal position.
• Higher emphasis on the social term (c2) converges to better locations, but
again gets stuck. This is due to over-reliance on the group information rather than on
individual information, e.g. when some individual finds a good place on its own.
• When c1 and c2 are equal, or when c1 is slightly higher, we see
convergence near the global optimum, as particles trust their own information
to find better regions while also using the information of the group.
• For the inertia weight, the dynamic schedule showed faster convergence. So keeping c1
and c2 equal and using a dynamic weight gives good convergence while keeping in mind
the stability requirements.
• PSO showed good results: the weight optimized by this method is
comparable to results from other methods in the literature, with 2 constraints active at
this minimum. (A sketch of such a parameter sweep follows after this list.)
2. 25-node truss: constraints were minimum cross-sectional area, displacement at each node and
allowable stress.
3. 72-bar truss: design variables are the cross-sectional areas. Constraints on maximum allowable
displacement and allowable stress.
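A sketch of how the c1/c2 study could be set up, permuting both parameters between 0 and 4; `pso` is the earlier sketch and the quadratic objective is only a stand-in for the truss model:

```python
import numpy as np
# assumes the pso() sketch from the PSO section above is already defined

def sweep_c1_c2(objective, x_min, x_max, values=(0.0, 1.0, 2.0, 3.0, 4.0)):
    # Permute the cognitive (c1) and social (c2) weights over a grid and record
    # the best objective value each combination reaches.
    results = {}
    for c1 in values:
        for c2 in values:
            _, best_f = pso(objective, x_min, x_max, c1=c1, c2=c2)
            results[(c1, c2)] = best_f
    return results

# e.g. on the toy quadratic used earlier (not the paper's truss):
table = sweep_c1_c2(lambda x: np.sum((x - 1.0) ** 2),
                    np.array([-5.0, -5.0]), np.array([5.0, 5.0]))
```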

=== Conclusion ===
• PSO mimics the social behaviour of animals in a flock.
• Individual and group memory are used to update each particle position, allowing both global and local
search during optimization.
• PSO can be formulated as a line search with stochastic step length and direction.
• The effects of the social and individual parameters and of the dynamic inertia weight were studied. c1 = c2 was
found favourable, giving fast, global convergence.
• PSO finds the same or better optima than other methods on the different structural optimization tasks.
