
Soft Computing Lab Manual

Index

1. Experiment 1: Implement Rough Sets and Fuzzy Logic
2. Experiment 2: Implement Discretization on a Continuous Dataset
3. Experiment 3: Implement Decision-Relative Discernibility Matrix and Function for Decision Rules
4. Experiment 4: Write and Implement Pseudocode for Discretization in Rough Sets
5. Experiment 5: Implement Arithmetic Operations in Fuzzy Sets
6. Experiment 6: Implement Genetic Algorithm for Function Optimization
7. Experiment 7: Implement Ant Colony Optimization for Pathfinding Problem

Experiment 1: Implement Rough Sets and Fuzzy Logic

Theory:

Rough Sets are a mathematical tool for dealing with vagueness and uncertainty, introduced by
Zdzisław Pawlak. The core idea is to approximate a set from below and above using equivalence
relations on the data. Objects that certainly belong to a concept form the lower approximation,
while objects that possibly belong form the upper approximation.
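
In symbols, writing [x] for the equivalence class of an object x and X for the target set:

Lower(X) = ∪ { [x] : [x] ⊆ X }
Upper(X) = ∪ { [x] : [x] ∩ X ≠ ∅ }

so Lower(X) ⊆ X ⊆ Upper(X), and Upper(X) − Lower(X) is the boundary region of objects whose membership is uncertain.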

Fuzzy Logic, developed by Lotfi Zadeh, extends classical logic by allowing truth values between 0 and
1. A fuzzy set represents data where elements have degrees of membership. This logic is useful when
boundaries between concepts are not crisp.
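
Formally, a fuzzy set A over a universe X is characterized by a membership function μA : X → [0, 1]: μA(x) = 1 means x fully belongs to A, μA(x) = 0 means it does not belong at all, and intermediate values express partial membership.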

Python Code:

# Rough Set Example
def lower_upper_approx(universe, target_set, equivalence_classes):
    # Lower approximation: union of equivalence classes fully contained in the target set.
    # Upper approximation: union of equivalence classes that overlap the target set.
    lower = set()
    upper = set()
    for eq_class in equivalence_classes:
        if eq_class.issubset(target_set):
            lower.update(eq_class)
        if eq_class & target_set:
            upper.update(eq_class)
    return lower, upper

U = {1, 2, 3, 4, 5, 6}
target = {1, 2, 3}
equivalence = [{1, 2}, {3, 4}, {5, 6}]

lower, upper = lower_upper_approx(U, target, equivalence)
print("Lower Approximation:", lower)
print("Upper Approximation:", upper)

# Fuzzy Set Operations
A = {'cold': 0.3, 'warm': 0.7, 'hot': 0.9}
B = {'cold': 0.4, 'warm': 0.6, 'hot': 0.8}

# Intersection (min of memberships)
intersection = {x: min(A[x], B[x]) for x in A}

# Union (max of memberships)
union = {x: max(A[x], B[x]) for x in A}

print("Fuzzy Intersection:", intersection)
print("Fuzzy Union:", union)


Experiment 2: Implement Discretization on a Continuous Dataset

Theory:

Discretization is a preprocessing technique in which continuous data attributes are divided into
discrete intervals or categories. It is particularly important in rough set theory where attribute values
need to be categorical.
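
For k equal-width bins over an attribute with range [min, max], the bin width is w = (max − min) / k, and bin i covers [min + i·w, min + (i+1)·w). Equal-frequency binning is the common alternative, placing (approximately) the same number of values in each bin.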

Python Code:

import pandas as pd

def equal_width_discretization(data, bins):
    # pd.cut splits the value range into `bins` equal-width intervals;
    # labels=False returns the integer bin index instead of an interval label.
    return pd.cut(data, bins=bins, labels=False)

data = pd.Series([2.5, 3.6, 4.1, 5.0, 6.7, 7.8])
discrete_data = equal_width_discretization(data, 3)
print("Discretized Data:\n", discrete_data)

Experiment 3: Implement Decision-Relative Discernibility Matrix and Function for Decision Rules

Theory:

The discernibility matrix is a table whose entry for a pair of objects with different decisions lists
the attributes that distinguish them. The discernibility function, derived from this matrix, is the
conjunction of these per-pair attribute disjunctions; simplifying it (e.g., into disjunctive normal
form, DNF) yields the reducts and decision rules.

Python Code:

def discernibility_matrix(data, decision):
    n = len(data)
    # matrix[i][j] lists the attribute indices on which objects i and j differ,
    # recorded only for pairs with different decision values.
    matrix = [[[] for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if decision[i] != decision[j]:
                matrix[i][j] = [attr for attr in range(len(data[0]))
                                if data[i][attr] != data[j][attr]]
    return matrix

data = [[1, 0], [1, 1], [0, 1]]
decision = [1, 1, 0]

dm = discernibility_matrix(data, decision)
print("Discernibility Matrix:")
for row in dm:
    print(row)
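
Expected Output:

Discernibility Matrix:
[[], [], [0, 1]]
[[], [], [0]]
[[], [], []]

From these entries the discernibility function is f = (a0 ∨ a1) ∧ a0, which simplifies by absorption to a0: attribute 0 alone is a reduct sufficient to separate the two decision classes.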

Experiment 4: Write and Implement Pseudocode for Discretization in Rough Sets

Theory:

In rough set theory, discretization is required to transform continuous attributes into intervals. This
enables equivalence class formation and approximation calculations. Proper discretization enhances
rule generation and accuracy.

Pseudocode:

1. Sort the dataset by the selected attribute.

2. Identify boundary points where decision changes.

3. Calculate midpoints of these boundaries to find cut points.

4. Assign intervals or bins based on these cut points.

Python Code:

import pandas as pd

def find_cut_points(df, attribute, decision_attribute):
    # Sort by the attribute, then place a cut at the midpoint between
    # consecutive values whose decision differs (a boundary point).
    sorted_df = df[[attribute, decision_attribute]].sort_values(by=attribute).reset_index(drop=True)
    cut_points = []
    for i in range(len(sorted_df) - 1):
        if sorted_df.loc[i, decision_attribute] != sorted_df.loc[i + 1, decision_attribute]:
            val1 = sorted_df.loc[i, attribute]
            val2 = sorted_df.loc[i + 1, attribute]
            cut_points.append((val1 + val2) / 2.0)
    return sorted(set(cut_points))

def discretize_attribute(value, cut_points):
    # Map a value to the index of the first cut point it does not exceed.
    for i, cp in enumerate(cut_points):
        if value <= cp:
            return f"bin_{i}"
    return f"bin_{len(cut_points)}"

def discretize_dataset(df, attributes, decision_attribute):
    discretized_df = df.copy()
    for attribute in attributes:
        cuts = find_cut_points(df, attribute, decision_attribute)
        discretized_df[attribute] = df[attribute].apply(lambda x: discretize_attribute(x, cuts))
    return discretized_df

# Example usage
data = {
    'Attribute1': [1.2, 2.5, 3.1, 3.8, 4.5, 5.2],
    'Attribute2': [10, 20, 20, 30, 40, 50],
    'Decision': ['A', 'A', 'B', 'B', 'C', 'C']
}
df = pd.DataFrame(data)
attributes = ['Attribute1', 'Attribute2']
discretized_df = discretize_dataset(df, attributes, 'Decision')
print(discretized_df)
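
Running this should print:

  Attribute1 Attribute2 Decision
0      bin_0      bin_0        A
1      bin_0      bin_0        A
2      bin_1      bin_0        B
3      bin_1      bin_1        B
4      bin_2      bin_2        C
5      bin_2      bin_2        C

For Attribute1 the decision changes between 2.5/3.1 and 3.8/4.5, giving cut points 2.8 and 4.15; for Attribute2 the tie at 20 produces a cut at 20.0 (so 20 itself falls in bin_0) and a second cut at 35.0.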
Experiment 5: Implement Arithmetic Operations in Fuzzy Sets

Theory:

Fuzzy set arithmetic enables combining or comparing fuzzy values using operations such as union,
intersection, complement, addition, subtraction, multiplication, and division.

- Union (A ∪ B): max(μA(x), μB(x))
- Intersection (A ∩ B): min(μA(x), μB(x))
- Complement (A'): 1 - μA(x)
- Addition (bounded sum): min(1, μA(x) + μB(x))
- Subtraction (bounded difference): max(0, μA(x) - μB(x))
- Multiplication (algebraic product): μA(x) * μB(x)
- Division: μA(x) / μB(x), provided μB(x) ≠ 0

Python Code:

import numpy as np

# Two fuzzy sets over the discrete universe {0, 1, ..., 10}, with membership
# falling off exponentially around the peaks at 3 and 7.
U = np.linspace(0, 10, 11)
A = {x: round(np.exp(-abs(x - 3) / 3), 2) for x in U}
B = {x: round(np.exp(-abs(x - 7) / 2), 2) for x in U}

def fuzzy_add(A, B):
    # Bounded sum: clip at 1 so the result stays a valid membership degree.
    return {x: min(1, A[x] + B[x]) for x in A}

def fuzzy_subtract(A, B):
    # Bounded difference: clip at 0.
    return {x: max(0, A[x] - B[x]) for x in A}

def fuzzy_multiply(A, B):
    return {x: A[x] * B[x] for x in A}

def fuzzy_divide(A, B):
    # Guard against division by zero; by convention return 0 there.
    return {x: A[x] / B[x] if B[x] != 0 else 0 for x in A}

print("Addition:", fuzzy_add(A, B))
print("Subtraction:", fuzzy_subtract(A, B))
print("Multiplication:", fuzzy_multiply(A, B))
print("Division:", fuzzy_divide(A, B))

Experiment 6: Implement Genetic Algorithm for Function Optimization

Theory:

Genetic Algorithms (GAs) simulate natural evolution. They maintain a population of candidate
solutions and evolve them through:

- Selection based on fitness
- Crossover for recombination
- Mutation for diversity

Python Code:

import random

def fitness(x):
    # Objective to maximize: f(x) = x^2 on the 5-bit range 0..31.
    return x ** 2

def create_chromosome():
    # A chromosome is a 5-bit integer.
    return random.randint(0, 31)

def crossover(p1, p2):
    # Single-point crossover on the bit representation:
    # take the low `point` bits from p1 and the high bits from p2.
    point = random.randint(1, 4)
    mask = (1 << point) - 1
    return (p1 & mask) | (p2 & ~mask & 0b11111)

def mutate(chrom, rate=0.1):
    # With probability `rate`, flip one randomly chosen bit.
    if random.random() < rate:
        bit = 1 << random.randint(0, 4)
        chrom ^= bit
    return chrom

def genetic_algorithm(generations=50, size=10):
    pop = [create_chromosome() for _ in range(size)]
    for g in range(generations):
        pop = sorted(pop, key=fitness, reverse=True)
        print(f"Gen {g}: Best = {pop[0]}, f(x) = {fitness(pop[0])}")
        # Elitist selection: keep the fitter half, refill with offspring.
        next_gen = pop[:size // 2]
        while len(next_gen) < size:
            p1, p2 = random.sample(next_gen, 2)
            child = mutate(crossover(p1, p2))
            next_gen.append(child)
        pop = next_gen
    # The final population is not sorted, so pick the fittest explicitly.
    return max(pop, key=fitness)

best = genetic_algorithm()
print(f"Best Solution: x = {best}, f(x) = {fitness(best)}")


Experiment 7: Implement Ant Colony Optimization for Pathfinding Problem

Theory:

Ant Colony Optimization (ACO) is a probabilistic technique that mimics the behavior of ants laying
down pheromone trails to find short paths in a graph.
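
At each step an ant at city i chooses the next unvisited city j with probability proportional to τ_ij^α · (1/d_ij)^β, where τ_ij is the pheromone on edge (i, j), d_ij its length, and α, β weight pheromone against distance. Evaporation then scales all pheromone by (1 − ρ), and ants deposit pheromone in inverse proportion to their tour length. These correspond to alpha, beta, and evap in the code below.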

Python Code:

import numpy as np

def ant_colony_optimization(dist, n_ants=5, n_iter=10, alpha=1, beta=2, evap=0.5):
    n = dist.shape[0]
    # Use a float pheromone matrix: an integer matrix (e.g. from np.ones_like
    # on an integer dist) would break the in-place evaporation and deposits.
    pheromone = np.ones(dist.shape, dtype=float)
    best_path = None
    best_length = float('inf')
    for _ in range(n_iter):
        paths = []
        for _ in range(n_ants):
            path = [0]
            while len(path) < n:
                i = path[-1]
                # Transition rule: pheromone^alpha * (1/distance)^beta,
                # zero for cities already visited.
                probs = [(pheromone[i][j] ** alpha) * ((1 / dist[i][j]) ** beta)
                         if j not in path else 0 for j in range(n)]
                probs = np.array(probs)
                probs = probs / probs.sum()
                path.append(np.random.choice(range(n), p=probs))
            # Tour length including the return edge to the start city.
            length = sum(dist[path[i]][path[i+1]] for i in range(n-1)) + dist[path[-1]][path[0]]
            paths.append((path, length))
            if length < best_length:
                best_length = length
                best_path = path
        # Evaporate, then let every ant deposit pheromone along its full tour.
        pheromone *= (1 - evap)
        for path, length in paths:
            for i in range(n - 1):
                pheromone[path[i]][path[i+1]] += 1 / length
            pheromone[path[-1]][path[0]] += 1 / length
    return best_path, best_length

matrix = np.array([[0, 2, 2, 5], [2, 0, 3, 4], [2, 3, 0, 2], [5, 4, 2, 0]])
path, length = ant_colony_optimization(matrix)
print("Best path:", path)
print("Shortest distance:", length)
