
School of Computer Science and Engineering

LAB FILE
SUBJECT: Soft Computing and Applications

SUBMITTED BY: Shyam Kumar Yadav (ADM NO. 22SCSE1180094)
SUBMITTED TO: Dr. Harshvardhan Choudhary

Date: 12/01/2025
Practical 1: To implement the AND function using a perceptron

A perceptron is a simple machine learning model that mimics the working of a biological neuron. It is a linear classifier that operates on a set of inputs and produces a binary output based on weights and a bias.

The AND function is a logical operation whose output is 1 only if both inputs are 1; otherwise, the output is 0.
w1, w2, b = 0.5, 0.5, -1

def activate(x):
    return 1 if x >= 0 else 0

def train_perceptron(inputs, desired_outputs, learning_rate, epochs):
    global w1, w2, b
    for epoch in range(epochs):
        total_error = 0
        for i in range(len(inputs)):
            A, B = inputs[i]
            target_output = desired_outputs[i]
            output = activate(w1 * A + w2 * B + b)
            error = target_output - output
            w1 += learning_rate * error * A
            w2 += learning_rate * error * B
            b += learning_rate * error
            total_error += abs(error)
        if total_error == 0:
            break

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
desired_outputs = [0, 0, 0, 1]
learning_rate = 0.1
epochs = 100

train_perceptron(inputs, desired_outputs, learning_rate, epochs)

for i in range(len(inputs)):
    A, B = inputs[i]
    output = activate(w1 * A + w2 * B + b)
    print(f"Input: ({A}, {B}) Output: {output}")

Input: (0, 0), Output: 0

Input: (0, 1), Output: 0

Input: (1, 0), Output: 0

Input: (1, 1), Output: 1


Practical 2: NOR gate implementation with binary inputs and bipolar targets using ADALINE

ADALINE uses a linear activation function during training and updates its weights based on the difference between the predicted and actual outputs using the Least Mean Squares (LMS) rule.

NOR Gate Logic

• A NOR gate is a universal logic gate that outputs 1 only when all inputs are 0.

import numpy as np

# Define a simple linearly separable dataset (X: inputs, Y: labels)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([0, 0, 0, 1])  # AND logic gate

# Initialize weights and bias
weights = np.zeros(X.shape[1])
bias = 0
learning_rate = 1
epochs = 100

# Fixed Increment Learning Algorithm for Perceptron
for epoch in range(epochs):
    weight_updates = 0
    for i in range(len(X)):
        # Calculate the perceptron output
        linear_output = np.dot(X[i], weights) + bias
        predicted = 1 if linear_output >= 0 else 0
        # Update weights if the prediction is incorrect
        if predicted != Y[i]:
            update = learning_rate * (Y[i] - predicted)
            weights += update * X[i]
            bias += update
            weight_updates += 1
    # Stop if no weights are updated in this epoch (convergence)
    if weight_updates == 0:
        break

# Output the final weights, bias, and number of epochs until convergence
print(f"Final Weights: {weights}")
print(f"Final Bias: {bias}")
print(f"Epochs until convergence: {epoch + 1}")

Input: [0 0], Output: 1

Input: [0 1], Output: -1

Input: [1 0], Output: -1

Input: [1 1], Output: -1
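The listing above follows the fixed-increment perceptron rule on the AND data; a minimal ADALINE-style sketch for the NOR gate itself, with binary inputs, bipolar targets, and the LMS (delta) rule described above, might look like the following. The learning rate, epoch count, evaluation threshold, and print format are assumptions made for illustration.

import numpy as np

# NOR gate: binary inputs, bipolar targets (+1 / -1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
T = np.array([1, -1, -1, -1])

weights = np.zeros(X.shape[1])
bias = 0.0
learning_rate = 0.1   # assumed value
epochs = 100          # assumed value

# ADALINE training: the weight update uses the linear (net) output, not a thresholded one
for epoch in range(epochs):
    for i in range(len(X)):
        net = np.dot(X[i], weights) + bias
        error = T[i] - net
        weights += learning_rate * error * X[i]
        bias += learning_rate * error

print(f"Final Weights: {weights}")
print(f"Final Bias: {bias}")
for i in range(len(X)):
    net = np.dot(X[i], weights) + bias
    print(f"Input: {X[i]}, Output: {1 if net >= 0 else -1}")

With these settings the weights should settle near [-1, -1] with a bias near 0.5, and thresholding the net at 0 should reproduce the bipolar NOR pattern shown above (1 for [0 0], -1 otherwise).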


Practical 3: XOR gate implementation with bipolar inputs and bipolar targets using MADALINE

MADALINE (Multiple Adaptive Linear Neurons) is a type of neural network designed to solve classification problems, even when they are non-linearly separable, such as the XOR problem. Introduced by Bernard Widrow, MADALINE extends the concept of ADALINE by having multiple layers of neurons and applying the majority rule or other decision strategies to determine the final output.

XOR Gate Logic

The XOR (Exclusive OR) gate outputs 1 only when the inputs are different.
import numpy as np

def bipolar_activation(net):
    return np.where(net >= 0, 1, -1)

def madaline_train(X, T, weights, learning_rate, max_epochs):
    for epoch in range(max_epochs):
        for i in range(X.shape[0]):
            net = np.dot(weights, X[i])
            output = bipolar_activation(net)
            if not np.array_equal(output, T[i]):
                weights += learning_rate * (T[i] - output).reshape(-1, 1) * X[i]
    return weights

X = np.array([[-1, -1, 1], [-1, 1, 1], [1, -1, 1], [1, 1, 1]])
T = np.array([[-1], [1], [1], [-1]])
weights = np.random.uniform(-1, 1, (1, X.shape[1]))
learning_rate = 0.1
max_epochs = 1000

final_weights = madaline_train(X, T, weights, learning_rate, max_epochs)
for i in range(X.shape[0]):
    net = np.dot(final_weights, X[i])
    output = bipolar_activation(net)
    print(f"Input: {X[i][:2]}, Output: {output[0]}, Target: {T[i][0]}")

1. Input: [-1 -1], Output: 1, Target: -1

2. Input: [-1 1], Output: -1, Target: 1

3. Input: [ 1 -1], Output: -1, Target: 1

4. Input: [1 1], Output: -1, Target: -1
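A single linear unit, as trained above, cannot separate XOR, which is why the outputs disagree with the targets. A minimal two-ADALINE sketch in the spirit of MADALINE, using hand-picked hidden weights and an OR-style output unit instead of the MRI/MRII training rule, could look like this (all weights below are illustrative assumptions, not learned values):

import numpy as np

def bipolar_activation(net):
    return np.where(net >= 0, 1, -1)

# Hidden layer: two ADALINE units with hand-picked weights
W_hidden = np.array([[1.0, -1.0],    # z1 fires only for (A, B) = (1, -1)
                     [-1.0, 1.0]])   # z2 fires only for (A, B) = (-1, 1)
b_hidden = np.array([-1.0, -1.0])

# Output unit: logical OR of the two hidden units
w_out = np.array([1.0, 1.0])
b_out = 1.0

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
T = np.array([-1, 1, 1, -1])

for x, t in zip(X, T):
    z = bipolar_activation(W_hidden @ x + b_hidden)   # hidden bipolar outputs
    y = int(bipolar_activation(w_out @ z + b_out))    # OR of the hidden units
    print(f"Input: {x}, Output: {y}, Target: {t}")

With these fixed weights the printed outputs match all four XOR targets, illustrating why the multi-unit structure described above is needed.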


Practical 4: Create a perceptron with an appropriate number of inputs and outputs. Train it using the fixed-increment learning algorithm until no change in weights is required. Output the final weights.

A perceptron is a type of single-layer artificial neural network used for binary classification. It learns by adjusting its weights based on errors using the fixed-increment learning algorithm (the perceptron learning rule).

import numpy as np

# Define a simple linearly separable dataset (X: inputs, Y: labels)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
Y = np.array([0, 0, 0, 1])  # AND logic gate

# Initialize weights and bias
weights = np.zeros(X.shape[1])
bias = 0
learning_rate = 1
epochs = 100

# Fixed Increment Learning Algorithm for Perceptron
for epoch in range(epochs):
    weight_updates = 0
    for i in range(len(X)):
        # Calculate the perceptron output
        linear_output = np.dot(X[i], weights) + bias
        predicted = 1 if linear_output >= 0 else 0
        # Update weights if the prediction is incorrect
        if predicted != Y[i]:
            update = learning_rate * (Y[i] - predicted)
            weights += update * X[i]
            bias += update
            weight_updates += 1
    # Stop if no weights are updated in this epoch (convergence)
    if weight_updates == 0:
        break

# Output the final weights, bias, and number of epochs until convergence
print(f"Final Weights: {weights}")
print(f"Final Bias: {bias}")
print(f"Epochs until convergence: {epoch + 1}")


Practical 5: Using a back-propagation network, find the new weights. The network is presented with the input pattern [0, 1] and the target output is 1. Use a learning rate α = 0.25 and the binary sigmoidal activation function.

Back-propagation is a supervised learning algorithm used to train multi-layer neural networks. It works by minimizing the error between predicted and target outputs using gradient descent.

Key Concepts:

1. Forward Propagation:
   o Input values are passed through the network layer by layer.
   o Each neuron computes its weighted sum and applies an activation function to produce an output.

2. Binary Sigmoidal Activation Function:
   f(x) = 1 / (1 + e^(-x))
import numpy as np

class NeuralNetwork:
    def __init__(self, input_size, hidden_size, output_size):
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size

        # Initialize weights
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size)

        # Initialize the biases
        self.bias_hidden = np.zeros((1, self.hidden_size))
        self.bias_output = np.zeros((1, self.output_size))

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def sigmoid_derivative(self, x):
        return x * (1 - x)

    def feedforward(self, X):
        # Input to hidden
        self.hidden_activation = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = self.sigmoid(self.hidden_activation)

        # Hidden to output
        self.output_activation = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        self.predicted_output = self.sigmoid(self.output_activation)
        return self.predicted_output

    def backward(self, X, y, learning_rate):
        # Compute the output layer error
        output_error = y - self.predicted_output
        output_delta = output_error * self.sigmoid_derivative(self.predicted_output)

        # Compute the hidden layer error
        hidden_error = np.dot(output_delta, self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.sigmoid_derivative(self.hidden_output)

        # Update weights and biases
        self.weights_hidden_output += np.dot(self.hidden_output.T, output_delta) * learning_rate
        self.bias_output += np.sum(output_delta, axis=0, keepdims=True) * learning_rate
        self.weights_input_hidden += np.dot(X.T, hidden_delta) * learning_rate
        self.bias_hidden += np.sum(hidden_delta, axis=0, keepdims=True) * learning_rate

    def train(self, X, y, epochs, learning_rate):
        for epoch in range(epochs):
            output = self.feedforward(X)
            self.backward(X, y, learning_rate)
            if epoch % 4000 == 0:
                loss = np.mean(np.square(y - output))
                print(f"Epoch {epoch}, Loss: {loss}")

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([[0], [1], [1], [0]])

nn = NeuralNetwork(input_size=2, hidden_size=4, output_size=1)
nn.train(X, y, epochs=10000, learning_rate=0.1)

# Test the trained model
output = nn.feedforward(X)
print("Predictions after training:")
print(output)

Updated Weights: [0.10668578 0.51644559]

Updated Bias: 0.9946749278016108
Output: 0.8171962820719835
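The class above trains a 2-4-1 network on the full XOR data rather than the single pattern stated in the problem. For that stated pattern ([0, 1], target 1, α = 0.25), one back-propagation step can be sketched as follows; the 2-2-1 layout and all initial weights are assumed purely for illustration.

import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Problem data from the statement
x = np.array([0.0, 1.0])   # input pattern
t = 1.0                    # target output
alpha = 0.25               # learning rate

# Assumed initial weights (illustrative only)
V = np.array([[0.6, -0.1],     # input-to-hidden weights (2 inputs x 2 hidden)
              [-0.3, 0.4]])
b_h = np.array([0.3, 0.5])     # hidden biases
W = np.array([0.4, 0.1])       # hidden-to-output weights
b_o = -0.2                     # output bias

# Forward pass with the binary sigmoid
z = sigmoid(x @ V + b_h)
y = sigmoid(z @ W + b_o)

# Backward pass (same squared-error gradients as the class above)
delta_o = (t - y) * y * (1 - y)
delta_h = delta_o * W * z * (1 - z)

# One weight update with alpha = 0.25
W_new = W + alpha * delta_o * z
b_o_new = b_o + alpha * delta_o
V_new = V + alpha * np.outer(x, delta_h)
b_h_new = b_h + alpha * delta_h

print("Output before update:", y)
print("New hidden-to-output weights:", W_new)
print("New input-to-hidden weights:\n", V_new)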


Practical 6: Program to perform union, intersection, and complement operations on fuzzy sets

Fuzzy sets are an extension of classical sets where elements have degrees of membership in the range [0, 1]. This flexibility allows handling of uncertainties and imprecision. Common operations on fuzzy sets include:

1. Union:
   o Combines two fuzzy sets.
   o Membership function: μA∪B(x) = max(μA(x), μB(x))

2. Intersection:
   o Finds the common elements of two fuzzy sets.
   o Membership function: μA∩B(x) = min(μA(x), μB(x))

3. Complement:
   o Represents the negation of a fuzzy set.
   o Membership function: μA′(x) = 1 − μA(x)

import numpy as np

def fuzzy_union(A, B):
    return np.maximum(A, B)

def fuzzy_intersection(A, B):
    return np.minimum(A, B)

def fuzzy_complement(A):
    return 1 - A

A = np.array([0.2, 0.5, 0.7, 0.9])
B = np.array([0.1, 0.6, 0.4, 0.8])

union = fuzzy_union(A, B)
intersection = fuzzy_intersection(A, B)
complement_A = fuzzy_complement(A)
complement_B = fuzzy_complement(B)

print(f"Fuzzy Set A: {A}")
print(f"Fuzzy Set B: {B}")
print(f"Union: {union}")
print(f"Intersection: {intersection}")
print(f"Complement of A: {complement_A}")
print(f"Complement of B: {complement_B}")

Fuzzy Set A: [0.2 0.5 0.7 0.9]
Fuzzy Set B: [0.1 0.6 0.4 0.8]
Union: [0.2 0.6 0.7 0.9]
Intersection: [0.1 0.5 0.4 0.8]
Complement of A: [0.8 0.5 0.3 0.1]
Complement of B: [0.9 0.4 0.6 0.2]
Practical 10: Create two matrices of dimensions 3x3 and 3x4, respectively, containing random numbers as their elements. Compute the composition of these two fuzzy relations using both max-min and max-product composition.
import numpy as np

A = np.random.rand(3, 3)
B = np.random.rand(3, 4)

def max_min_composition(A, B):
    result = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            result[i, j] = np.max(np.minimum(A[i, :], B[:, j]))
    return result

def max_product_composition(A, B):
    result = np.zeros((A.shape[0], B.shape[1]))
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            result[i, j] = np.max(A[i, :] * B[:, j])
    return result

max_min_result = max_min_composition(A, B)
max_product_result = max_product_composition(A, B)

print("Matrix A (3x3):")
print(A)
print("Matrix B (3x4):")
print(B)
print("Max-Min Composition:")
print(max_min_result)
print("Max-Product Composition:")
print(max_product_result)
Matrix A (3x3):
[[0.74587118 0.85836126 0.16810991]
 [0.85850814 0.04625704 0.3937905 ]
 [0.48409871 0.1296514  0.42916574]]
Matrix B (3x4):
[[0.8868594  0.26330121 0.09024186 0.66390533]
 [0.66543901 0.73539113 0.78105965 0.46743334]
 [0.13985728 0.99920588 0.44160764 0.21660343]]
Max-Min Composition:
[[0.74587118 0.73539113 0.78105965 0.66390533]
 [0.85850814 0.3937905  0.3937905  0.66390533]
 [0.48409871 0.42916574 0.42916574 0.48409871]]
Max-Product Composition:
[[0.66148287 0.63123126 0.67043135 0.49518785]
 [0.76137602 0.39347779 0.1739009  0.56996813]
 [0.4293275  0.42882493 0.18952287 0.32139572]]
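As a check on the max-min rule, the (1, 1) entry of the result is max(min(0.7459, 0.8869), min(0.8584, 0.6654), min(0.1681, 0.1399)) = max(0.7459, 0.6654, 0.1399) ≈ 0.7459, which matches the first element of the composition shown above.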


Practical 9: Write a program that creates two random fuzzy sets of dimensions n and m (to be defined by the user). Compute the fuzzy relation formed by the Cartesian product of the sets.

Fuzzy sets extend classical sets by assigning a membership degree (between 0 and 1) to each element. When dealing with multiple fuzzy sets, their Cartesian product forms a fuzzy relation that defines the relationship between elements of the two sets.

import random

def generate_fuzzy_set(size):
    return {i: random.uniform(0, 1) for i in range(size)}

def cartesian_product(set1, set2):
    return {(i, j): set1[i] * set2[j] for i in set1 for j in set2}

def main():
    n = int(input("Enter the size of the first fuzzy set: "))
    m = int(input("Enter the size of the second fuzzy set: "))
    set1 = generate_fuzzy_set(n)
    set2 = generate_fuzzy_set(m)
    fuzzy_relation = cartesian_product(set1, set2)
    print("Fuzzy Relation: ")
    for key, value in fuzzy_relation.items():
        print(f"{key}: {value}")

if __name__ == "__main__":
    main()
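A small design note: cartesian_product above combines membership grades with the algebraic product; the min operator, μR(i, j) = min(μA(i), μB(j)), is the other common choice for the fuzzy Cartesian product and could be substituted in that dictionary comprehension.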
8. Write a program in to implement De-Morgan’s Law.

De Morgan's Laws provide a way to relate the complement of the


union and intersection of two sets. In the context of fuzzy logic and
fuzzy sets, these laws help us handle operations like union,
intersection, and complement efficiently.

De Morgan’s Laws:

For fuzzy sets AAA and BBB, the laws are:

1. Complement of the Union:

A B=A∩B

This means the complement of the union of two fuzzy sets is equal to
the intersection of their complements.

2. Complement of the Intersection:

A∩B=A B
This means the complement of the intersection of two fuzzy sets is
equal to the union of their complements.
import numpy as np

def fuzzy_complement(A):
    return 1 - A

def fuzzy_union(A, B):
    return np.maximum(A, B)

def fuzzy_intersection(A, B):
    return np.minimum(A, B)

A = np.array([0.2, 0.5, 0.7, 0.9])
B = np.array([0.1, 0.6, 0.4, 0.8])

left_hand_side_1 = fuzzy_complement(fuzzy_intersection(A, B))
right_hand_side_1 = fuzzy_union(fuzzy_complement(A), fuzzy_complement(B))

left_hand_side_2 = fuzzy_complement(fuzzy_union(A, B))
right_hand_side_2 = fuzzy_intersection(fuzzy_complement(A), fuzzy_complement(B))

print(f"Set A: {A}")
print(f"Set B: {B}")
print("De-Morgan's Law 1: Complement of Intersection = Union of Complements")
print(f"LHS: {left_hand_side_1}")
print(f"RHS: {right_hand_side_1}")
print("De-Morgan's Law 2: Complement of Union = Intersection of Complements")
print(f"LHS: {left_hand_side_2}")
print(f"RHS: {right_hand_side_2}")

1. Fuzzy Set A: [0.2 0.5 0.7 0.9]

2. Fuzzy Set B: [0.1 0.6 0.4 0.8]

3. Left Side (Complement of Intersection): [0.9 0.5 0.6 0.2]

4. Right Side (Union of Complements): [0.9 0.5 0.6 0.2]

5. De-Morgan's Law Verified: True


Practical 7: Generate the ANDNOT function using a McCulloch-Pitts neural net.

The McCulloch-Pitts (MP) neuron is one of the earliest models of a neural network, used for binary classification. It operates by applying a threshold function to the weighted sum of inputs, and it outputs a binary result (0 or 1). The ANDNOT function is a logical operation that combines AND with a negation of the second input: its output is 1 only when the first input is 1 and the second input is 0.

ANDNOT Logic:

• The ANDNOT function is defined as:
  ANDNOT(A, B) = A AND (NOT B)

import numpy as np

def mp_neuron(inputs, weights, threshold):
    weighted_sum = np.dot(inputs, weights)
    output = 1 if weighted_sum >= threshold else 0
    return output

def and_not(x1, x2):
    weights = [1, -1]
    threshold = 1
    inputs = np.array([x1, x2])
    output = mp_neuron(inputs, weights, threshold)
    return output

print(and_not(0, 0))
print(and_not(1, 0))
print(and_not(0, 1))
print(and_not(1, 1))
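With weights [1, -1] and threshold 1, these four calls print 0, 1, 0, 0: the neuron fires only for the input pair (1, 0), as the ANDNOT definition requires.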
Practical 11: Study of a research paper based on fuzzy logic. Choose any one of the following:
(a) Integration of Fuzzy and Deep Learning in Three-Way Decisions.
(b) Aspect-based Fuzzy Logic Sentiment Analysis on Social Media Big Data.
(c) Short-Term Forecasting of Convective Weather Affecting Civil Aviation Operations Using Deep Learning.

1 Integration of Fuzzy and Deep Learning in Three-Way Decisions:


This paper would explore the integration of fuzzy logic and deep learning techniques in making
decisions where uncertainty or incomplete information exists, often involving three potential
outcomes: true, false, or indeterminate. The fusion of fuzzy logic (which deals with reasoning
under uncertainty) with deep learning models could offer insights into enhancing
decision-making systems.
2 Aspect-based Fuzzy Logic Sentiment Analysis on Social Media Big Data:
This research likely focuses on applying fuzzy logic in sentiment analysis to process vast
amounts of social media data. Aspect-based sentiment analysis evaluates sentiments for
specific features (aspects) of a product or service, and combining it with fuzzy logic can
improve its accuracy in the context of ambiguous or imprecise data.
3 Short-Term Forecasting of Convective Weather Affecting Civil Aviation Operations Using
Deep Learning:
This paper would focus on using deep learning models to forecast weather phenomena,
specifically convective weather, which can impact civil aviation. The inclusion of fuzzy logic
might help manage the uncertainty inherent in weather predictions, potentially making the
forecasts more robust and accurate for aviation operations.
