
Student ID: 5131342

Name: Rohan Gupta

Course: Computer Science

Class: Msc CS Part 1 Sem 2

Date: 10/02/2025

Total Pages: 04

Particle Swarm Optimization (PSO) – A Simplified Approach


1. Introduction to PSO
Particle Swarm Optimization (PSO) is a powerful optimization
technique inspired by the way bird flocks and fish schools move
together. It is widely used to solve complex problems by continuously
refining candidate solutions based on both individual and collective
experience.

Key Goals of PSO:


- Identify the best possible solution within a given search space.
- Enhance computational efficiency in optimization tasks.
- Leverage swarm intelligence to explore multiple solutions
simultaneously.
- Maintain a balance between discovering new solutions
(exploration) and refining existing ones (exploitation).

2. How PSO Works


In PSO, a group of "particles" (potential solutions) moves through a
defined solution space. Each particle adjusts its position based on:
- Personal best (pbest): The best solution the particle has found so
far.
- Global best (gbest): The best solution discovered by any particle in
the swarm.

Each particle’s velocity is updated using the formula:

\[
v_i(t+1) = w \cdot v_i(t) + c_1 \cdot r_1 \cdot (pbest_i - x_i(t)) + c_2 \cdot r_2 \cdot (gbest - x_i(t))
\]

\[
x_i(t+1) = x_i(t) + v_i(t+1)
\]

Where:
- \( v_i \) – Velocity of the particle
- \( x_i \) – Position of the particle
- \( w \) – Inertia weight (controls the balance between exploration
and exploitation)
- \( c_1, c_2 \) – Acceleration coefficients
- \( r_1, r_2 \) – Random numbers between 0 and 1
- \( pbest_i \) – Best position found by an individual particle
- \( gbest \) – Best position found by the entire swarm
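
To make the update concrete, here is one hand-computed step in a single dimension, using illustrative values: \( w = 0.5 \), \( c_1 = c_2 = 1.5 \), \( r_1 = 0.6 \), \( r_2 = 0.3 \), current position \( x_i(t) = 4 \), velocity \( v_i(t) = 1 \), \( pbest_i = 3 \), and \( gbest = 1 \):

\[
v_i(t+1) = 0.5 \cdot 1 + 1.5 \cdot 0.6 \cdot (3 - 4) + 1.5 \cdot 0.3 \cdot (1 - 4) = 0.5 - 0.9 - 1.35 = -1.75
\]

\[
x_i(t+1) = 4 + (-1.75) = 2.25
\]

The particle moves from 4 toward both its personal best at 3 and the swarm's best at 1, which is exactly the pull the two acceleration terms are meant to create.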

3. Implementing PSO in Python


Let's see how PSO can be implemented to minimize a simple
function:

\[
f(x, y) = x^2 + y^2
\]

This function is minimized when \( x = 0 \) and \( y = 0 \).

#Python Code for PSO:


```python
import numpy as np

# Define the function to minimize
def objective_function(position):
    x, y = position
    return x**2 + y**2

# PSO parameters
num_particles = 10
num_iterations = 100
w = 0.5            # inertia weight
c1, c2 = 1.5, 1.5  # acceleration coefficients

# Initialize particle positions and velocities
particles = np.random.uniform(-10, 10, (num_particles, 2))
velocities = np.random.uniform(-1, 1, (num_particles, 2))

# Initialize personal and global best positions
pbest_positions = np.copy(particles)
pbest_scores = np.array([objective_function(p) for p in particles])
gbest_position = particles[np.argmin(pbest_scores)].copy()
gbest_score = np.min(pbest_scores)

# PSO algorithm
for _ in range(num_iterations):
    for i in range(num_particles):
        r1, r2 = np.random.rand(), np.random.rand()
        velocities[i] = (
            w * velocities[i]
            + c1 * r1 * (pbest_positions[i] - particles[i])
            + c2 * r2 * (gbest_position - particles[i])
        )
        particles[i] += velocities[i]

        new_score = objective_function(particles[i])
        if new_score < pbest_scores[i]:
            pbest_positions[i] = particles[i]
            pbest_scores[i] = new_score
        if new_score < gbest_score:
            gbest_position = particles[i].copy()  # copy so later moves don't overwrite it
            gbest_score = new_score

# Display results
print(f"Optimal solution found at: {gbest_position}")
print(f"Minimum function value: {gbest_score}")
```
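
Because the initial positions, the initial velocities, and the factors \( r_1, r_2 \) are drawn at random, the exact numbers printed will differ slightly from run to run. If a repeatable run is wanted, NumPy's global seed can be fixed once before the initialization above; the value 42 here is an arbitrary choice.

```python
import numpy as np

# Optional: call once before initializing the particles so that every run of
# the script draws the same random numbers and produces the same result.
np.random.seed(42)
```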

4. Understanding the Results


#Expected Outcome:
The PSO algorithm will iteratively refine the positions of particles,
aiming to minimize the function \( f(x, y) = x^2 + y^2 \). Since the
minimum value of this function is 0, the expected output should be
close to (0,0).

#Example Output:
```
Optimal solution found at: [0.0023, -0.0018]
Minimum function value: 0.0000052
```
This result indicates that the algorithm successfully converged to a
solution very close to the global minimum.
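
A quick numerical check, assuming the Section 3 script has just been run in the same Python session (so that gbest_position and gbest_score are still defined), is to measure how far the best position found lies from the known optimum at (0, 0):

```python
import numpy as np

# Assumes gbest_position and gbest_score from the Section 3 script are in scope.
distance_from_optimum = np.linalg.norm(gbest_position)  # Euclidean distance to (0, 0)
print(f"Distance from the known optimum: {distance_from_optimum:.6f}")
print(f"Best objective value found: {gbest_score:.8f}")
```
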
5. Applications of PSO
PSO is used across multiple fields to solve complex optimization
problems, including:

- Engineering: Fine-tuning machine learning parameters.
- Robotics: Path optimization for autonomous robots.
- Healthcare: Enhancing medical imaging and diagnostics.
- Finance: Portfolio optimization and risk management.
- Deep Learning: Tuning hyperparameters for neural networks (a brief sketch follows this list).
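
As an illustration of the deep-learning use case, the sketch below searches over two hyperparameters, the learning rate and the L2 penalty, both expressed in \( \log_{10} \) space, using the same update loop as in Section 3. The function validation_loss is a synthetic stand-in chosen only so the example runs on its own; in a real setting it would train a network with the given hyperparameters and return its validation loss.

```python
import numpy as np

# Hypothetical sketch: PSO over two hyperparameters in log10 space.
# A real validation_loss would train and evaluate a model; this synthetic
# surface (lowest near learning rate 1e-3 and L2 penalty 1e-4) stands in for it.
def validation_loss(position):
    log_lr, log_reg = position
    return (log_lr + 3.0) ** 2 + 0.5 * (log_reg + 4.0) ** 2

np.random.seed(0)
num_particles, num_iterations = 15, 60
w, c1, c2 = 0.5, 1.5, 1.5

# Search log10(learning rate) and log10(L2 penalty) in [-6, 0]
particles = np.random.uniform(-6, 0, (num_particles, 2))
velocities = np.random.uniform(-1, 1, (num_particles, 2))
pbest_positions = particles.copy()
pbest_scores = np.array([validation_loss(p) for p in particles])
gbest_position = particles[np.argmin(pbest_scores)].copy()
gbest_score = np.min(pbest_scores)

# Same update loop as in Section 3; only the objective has changed
for _ in range(num_iterations):
    for i in range(num_particles):
        r1, r2 = np.random.rand(), np.random.rand()
        velocities[i] = (
            w * velocities[i]
            + c1 * r1 * (pbest_positions[i] - particles[i])
            + c2 * r2 * (gbest_position - particles[i])
        )
        particles[i] += velocities[i]
        score = validation_loss(particles[i])
        if score < pbest_scores[i]:
            pbest_positions[i], pbest_scores[i] = particles[i].copy(), score
            if score < gbest_score:
                gbest_position, gbest_score = particles[i].copy(), score

best_lr, best_l2 = 10.0 ** gbest_position
print(f"Suggested learning rate: {best_lr:.2e}, L2 penalty: {best_l2:.2e}")
```

Here the particles explore hyperparameter values on a logarithmic scale, which keeps the search well behaved when good values can span several orders of magnitude.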

Final Thoughts
Particle Swarm Optimization is a simple yet effective technique that
mimics natural swarm intelligence to solve optimization problems
efficiently. Its adaptability makes it useful in various domains, from
artificial intelligence to engineering and beyond.
