
Exercise Number 11 Date: 12-03-2025

Single Layer Perceptron


Aim:
To perform Single Layer Perceptron on Linearly and Non-Linearly Separable Data

Algorithm
Model Overview: Single-Layer Perceptron (SLP)

The Single-Layer Perceptron (SLP) is a basic neural network model for binary
classification. It consists of a single neuron that passes a weighted sum of the inputs
through an activation function to produce the output. The model is trained with the
perceptron learning rule, a form of stochastic gradient descent that minimizes the
classification error: the weights are adjusted iteratively on misclassified points until
the data are classified correctly (or a fixed number of epochs elapses).
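
In symbols (a standard formulation, stated here for reference rather than taken verbatim from the exercise): with weight vector $w$, bias $b$, input $x$, and learning rate $\eta$, the perceptron predicts

$$\hat{y} = \begin{cases} 1 & \text{if } w \cdot x + b \ge 0 \\ 0 & \text{otherwise} \end{cases}$$

and, after each training sample, updates

$$w \leftarrow w + \eta\,(y - \hat{y})\,x, \qquad b \leftarrow b + \eta\,(y - \hat{y}).$$

The code below folds the bias into $w$ by prepending a constant-1 feature to every sample.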

Steps Involved
1. Data Loading:
Two synthetic datasets are generated - one that is linearly separable and another
that is non-linearly separable.

2. Data Exploration:
The datasets are visualized using scatter plots to observe their structure.

3. Data Preprocessing:
Feature Scaling: Standardization is applied to ensure efficient learning.

4. Model Training:
An SLP model is implemented from scratch and trained on both datasets using
stochastic gradient descent (SGD); a worked one-step example follows this list.
The weight updates follow the rule:

W_new = W_old + learning_rate * (y_true - y_pred) * X

5. Model Evaluation:
The trained model is evaluated using accuracy scores. Visualizations include decision
boundaries.
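
As a concrete illustration of the update rule (with illustrative numbers, not taken from the run below): suppose learning_rate = 0.1, the current weights are W_old = [0.0, 0.5, -0.5] (bias first), and a bias-augmented sample X = [1.0, 2.0, 1.0] has true label y_true = 0. The weighted sum is 0.0*1 + 0.5*2 + (-0.5)*1 = 0.5 >= 0, so the step activation gives y_pred = 1 and the error is 0 - 1 = -1. The update is W_new = [0.0 - 0.1*1, 0.5 - 0.1*2, -0.5 - 0.1*1] = [-0.1, 0.3, -0.6]. A correctly classified point has y_true - y_pred = 0 and leaves the weights unchanged, which is why training stops changing the weights once the data are fit.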

Code and Output

Linearly Separable Data


Import necessary libraries

In [1]: import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier  # imported but not used in this exercise
from sklearn.metrics import accuracy_score

Load the dataset

In [2]: np.random.seed(42)
X_linear = np.random.randn(200, 2)
y_linear = (X_linear[:, 0] + X_linear[:, 1] > 0).astype(int)

Split the Data

In [3]: X_train_lin, X_test_lin, y_train_lin, y_test_lin = train_test_split(X_linear, y_linear, test_size=0.2, random_state=42)  # split parameters assumed; the original line is truncated

Preprocess the data

In [4]: scaler = StandardScaler()
X_train_lin = scaler.fit_transform(X_train_lin)  # fit on the training split only
X_test_lin = scaler.transform(X_test_lin)        # reuse the training statistics
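
As a quick sanity check (a minimal sketch, assuming the arrays from the cells above are in scope), the standardized training features should have mean approximately 0 and standard deviation approximately 1 per column:

print(X_train_lin.mean(axis=0))  # expected: values near [0, 0]
print(X_train_lin.std(axis=0))   # expected: values near [1, 1]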

Single Layer Perceptron

In [5]: class SingleLayerPerceptron:
    def __init__(self, input_dim, learning_rate=0.01, epochs=1000):
        self.weights = np.random.randn(input_dim + 1)  # +1 for the bias weight
        self.learning_rate = learning_rate
        self.epochs = epochs

    def activation(self, x):
        return 1 if x >= 0 else 0  # Step function

    def predict(self, X):
        X_bias = np.c_[np.ones((X.shape[0], 1)), X]  # Prepend a constant-1 bias column
        return np.array([self.activation(np.dot(self.weights, x)) for x in X_bias])

    def train(self, X, y):
        X_bias = np.c_[np.ones((X.shape[0], 1)), X]  # Prepend a constant-1 bias column
        for _ in range(self.epochs):
            for i in range(X.shape[0]):  # One SGD update per sample
                prediction = self.activation(np.dot(self.weights, X_bias[i]))
                error = y[i] - prediction  # Non-zero only for misclassified points
                self.weights += self.learning_rate * error * X_bias[i]

Training the model

In [6]: perceptron_linear = SingleLayerPerceptron(input_dim=2)
perceptron_linear.train(X_linear, y_linear)  # note: trained on the full, unscaled dataset; the split/scaled arrays above are not used here

Evaluating the model


In [7]: y_pred_lin = perceptron_linear.predict(X_linear)
acc_lin = np.mean(y_pred_lin == y_linear)  # accuracy on the same data used for training
print(f"Accuracy on Linearly Separable Data: {acc_lin:.4f}")

Accuracy on Linearly Separable Data: 1.0000

Visualisation

In [8]: def plot_data_og(X, y, title, model=None):
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap='coolwarm', edgecolors='k')
    plt.title(title)
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")

    if model is not None:
        xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100),
                             np.linspace(X[:, 1].min(), X[:, 1].max(), 100))
        Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
        Z = Z.reshape(xx.shape)
        plt.contour(xx, yy, Z, levels=[0.5], colors='black')

    plt.show()

In [9]: plot_data_og(X_linear, y_linear, "Linearly Separable Data")

In [10]: def plot_data(X, y, title, model, nonlinear=False):
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap='coolwarm', edgecolors='k')
    plt.title(title)
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")
    xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100),
                         np.linspace(X[:, 1].min(), X[:, 1].max(), 100))

    if nonlinear:
        # Augment the grid with squared features to match the 4-D model input
        X_grid = np.c_[xx.ravel(), yy.ravel(), xx.ravel()**2, yy.ravel()**2]
    else:
        X_grid = np.c_[xx.ravel(), yy.ravel()]

    Z = model.predict(X_grid)
    Z = Z.reshape(xx.shape)
    plt.contour(xx, yy, Z, levels=[0.5], colors='black')

    plt.show()

In [11]: plot_data(X_linear, y_linear, "Linearly Separable Data", perceptron_linear)

Non-Linearly Separable Data

Import necessary libraries

In [12]: import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier  # imported but not used in this exercise
from sklearn.metrics import accuracy_score

Load the dataset


In [13]: X_nonlinear = np.random.randn(200, 2)
y_nonlinear = (X_nonlinear[:, 0]**2 + X_nonlinear[:, 1]**2 > 1).astype(int)  # label 1 outside the unit circle
X_nonlinear = np.c_[X_nonlinear, X_nonlinear[:, 0]**2, X_nonlinear[:, 1]**2]  # append squared features
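
Appending the squared features makes the circular boundary x1^2 + x2^2 = 1 linear in the augmented 4-D space, which is why a single-layer perceptron can now fit it. As a quick check (a sketch using the arrays above; the weight vector is illustrative, hand-picked rather than learned):

w_ideal = np.array([-1.0, 0.0, 0.0, 1.0, 1.0])  # bias, x1, x2, x1^2, x2^2
X_bias = np.c_[np.ones((X_nonlinear.shape[0], 1)), X_nonlinear]
y_hat = (X_bias @ w_ideal >= 0).astype(int)  # -1 + x1^2 + x2^2 >= 0  <=>  outside the unit circle
print(np.mean(y_hat == y_nonlinear))  # 1.0: the classes are separable in the augmented space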

Single Layer Perceptron

In [14]: class SingleLayerPerceptron:
    def __init__(self, input_dim, learning_rate=0.01, epochs=1000, activation='step'):
        self.weights = np.random.randn(input_dim + 1)  # +1 for the bias weight
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.activation_function = activation

    def activation(self, x):
        if self.activation_function == 'step':
            return 1 if x >= 0 else 0
        elif self.activation_function == 'sigmoid':
            # Thresholding the sigmoid at 0.5 yields a hard 0/1 output,
            # matching the step function for all non-zero inputs
            return 1 / (1 + np.exp(-x)) > 0.5

    def predict(self, X):
        X_bias = np.c_[np.ones((X.shape[0], 1)), X]  # Prepend a constant-1 bias column
        return np.array([self.activation(np.dot(self.weights, x)) for x in X_bias])

    def train(self, X, y):
        X_bias = np.c_[np.ones((X.shape[0], 1)), X]  # Prepend a constant-1 bias column
        for _ in range(self.epochs):
            for i in range(X.shape[0]):  # One SGD update per sample
                prediction = self.activation(np.dot(self.weights, X_bias[i]))
                error = y[i] - prediction
                self.weights += self.learning_rate * error * X_bias[i]
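
A quick check of that equivalence (a minimal sketch, assuming the class above is defined; the two activations can differ only at exactly x = 0, where the sigmoid equals 0.5):

p_step = SingleLayerPerceptron(input_dim=4, activation='step')
p_sig = SingleLayerPerceptron(input_dim=4, activation='sigmoid')
xs = np.random.randn(1000)  # random pre-activation values, almost surely non-zero
agree = all(p_step.activation(x) == int(p_sig.activation(x)) for x in xs)
print(agree)  # True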

Training the model

In [15]: perceptron_nonlinear = SingleLayerPerceptron(input_dim=4, activation='sigmoid')
perceptron_nonlinear.train(X_nonlinear, y_nonlinear)

Evaluating the model

In [16]: y_pred_nonlin = perceptron_nonlinear.predict(X_nonlinear)
acc_nonlin = np.mean(y_pred_nonlin == y_nonlinear)
print(f"Accuracy on Non-Linearly Separable Data: {acc_nonlin:.4f}")

Accuracy on Non-Linearly Separable Data: 1.0000

Visualisation

In [17]: def plot_data_og(X, y, title, model=None):
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap='coolwarm', edgecolors='k')
    plt.title(title)
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")

    if model is not None:
        xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100),
                             np.linspace(X[:, 1].min(), X[:, 1].max(), 100))
        Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
        Z = Z.reshape(xx.shape)
        plt.contour(xx, yy, Z, levels=[0.5], colors='black')

    plt.show()

In [18]: plot_data_og(X_nonlinear, y_nonlinear, "Non-Linearly Separable Data")

In [19]: def plot_data(X, y, title, model):
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap='coolwarm', edgecolors='k')
    plt.title(title)
    plt.xlabel("Feature 1")
    plt.ylabel("Feature 2")

    xx, yy = np.meshgrid(np.linspace(X[:, 0].min(), X[:, 0].max(), 100),
                         np.linspace(X[:, 1].min(), X[:, 1].max(), 100))
    # Augment the grid with squared features to match the 4-D model input
    Z = model.predict(np.c_[xx.ravel(), yy.ravel(), xx.ravel()**2, yy.ravel()**2])
    Z = Z.reshape(xx.shape)
    plt.contour(xx, yy, Z, levels=[0.5], colors='black')

    plt.show()

In [20]: plot_data(X_nonlinear, y_nonlinear, "Non-Linearly Separable Data", perceptron_nonlinear)


Result:
The outputs were verified successfully.
