MLT_11
Algorithm
Model Overview
Single-Layer Perceptron (SLP) Overview
The Single-Layer Perceptron (SLP) is a basic neural network model used for binary
classification. It consists of a single neuron with an activation function that determines
the output. The model is trained using gradient descent to minimize the classification
error. The perceptron adjusts its weights iteratively based on misclassified points until
convergence.
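As a concrete illustration of the single neuron: the output is 1 when the weighted sum w·x + b is non-negative and 0 otherwise. The weights and input below are arbitrary values chosen only for this example, not taken from the notebook.

import numpy as np

w, b = np.array([1.0, 1.0]), 0.0     # example weights and bias (arbitrary)
x = np.array([0.5, -0.2])            # one input point
y_hat = int(x @ w + b >= 0)          # step activation: 1 here, since 0.5 - 0.2 = 0.3 >= 0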
Steps Involved
1. Data Loading:
Two synthetic datasets are generated - one that is linearly separable and another
that is non-linearly separable.
2. Data Exploration:
The datasets are visualized using scatter plots to observe their structure.
3. Data Preprocessing:
Feature Scaling: Standardization is applied to ensure efficient learning.
4. Model Training:
An SLP model is implemented from scratch and trained on both datasets using
stochastic gradient descent (SGD). The weight updates follow the perceptron rule
w ← w + η (y − ŷ) x,  b ← b + η (y − ŷ),
applied to each training sample in turn; a sketch of this training loop is given after this list.
5. Model Evaluation:
The trained models are evaluated with accuracy scores, and decision boundaries are
visualized for both datasets.
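The notebook's from-scratch SLP implementation is not shown above; the sketch below illustrates one way a single neuron with the per-sample update rule could be written. The class name SingleLayerPerceptron and the learning-rate and epoch defaults are assumptions, not the notebook's exact code.

import numpy as np

class SingleLayerPerceptron:
    """A single neuron with a step activation, trained with perceptron-style SGD (sketch)."""

    def __init__(self, lr=0.1, epochs=100):
        self.lr = lr          # learning rate (the eta in the update rule)
        self.epochs = epochs  # full passes over the training set
        self.w = None
        self.b = 0.0

    def predict(self, X):
        # Step activation: 1 where w.x + b >= 0, else 0
        return (X @ self.w + self.b >= 0).astype(int)

    def fit(self, X, y):
        self.w = np.zeros(X.shape[1])
        for _ in range(self.epochs):
            for xi, yi in zip(X, y):              # stochastic: one sample at a time
                y_hat = int(xi @ self.w + self.b >= 0)
                error = yi - y_hat                # zero for correctly classified points
                self.w += self.lr * error * xi    # w <- w + eta * (y - y_hat) * x
                self.b += self.lr * error         # b <- b + eta * (y - y_hat)
        return self

Because the error term y − ŷ is zero for correctly classified points, each epoch only adjusts the weights in response to misclassifications, which matches the behaviour described in the overview.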
In [2]: import numpy as np

np.random.seed(42)
# Linearly separable data: label is 1 when x1 + x2 > 0
X_linear = np.random.randn(200, 2)
y_linear = (X_linear[:, 0] + X_linear[:, 1] > 0).astype(int)
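The generation of the second, non-linearly separable dataset is not shown above. One plausible construction, consistent with the squared features used in the decision-boundary code below, is a circular class boundary; the exact recipe in the notebook may differ.

# Assumed construction: points outside the unit circle form one class, points inside the other
X_nonlinear = np.random.randn(200, 2)
y_nonlinear = (X_nonlinear[:, 0]**2 + X_nonlinear[:, 1]**2 > 1).astype(int)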
Visualisation
Scatter plots of the two datasets, coloured by class label.
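A minimal sketch of these scatter plots, assuming matplotlib and the hypothetical X_nonlinear / y_nonlinear defined above:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X_linear[:, 0], X_linear[:, 1], c=y_linear, cmap='bwr', edgecolor='k')
axes[0].set_title('Linearly separable')
axes[1].scatter(X_nonlinear[:, 0], X_nonlinear[:, 1], c=y_nonlinear, cmap='bwr', edgecolor='k')
axes[1].set_title('Non-linearly separable')
plt.show()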
def plot_decision_boundary(model, X, y, nonlinear=False):
    # Helper for drawing the learned decision boundary over the data
    # (the function wrapper and grid construction are an assumed reconstruction)
    xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - 1, X[:, 0].max() + 1, 200),
                         np.linspace(X[:, 1].min() - 1, X[:, 1].max() + 1, 200))
    if nonlinear:
        # Quadratic feature map: the grid must match the features the model was trained on
        X_grid = np.c_[xx.ravel(), yy.ravel(), xx.ravel()**2, yy.ravel()**2]
    else:
        X_grid = np.c_[xx.ravel(), yy.ravel()]
    Z = model.predict(X_grid)
    Z = Z.reshape(xx.shape)
    plt.contour(xx, yy, Z, levels=[0.5], colors='black')   # draw the class boundary
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap='bwr', edgecolor='k')
    plt.show()
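Assuming the hypothetical SingleLayerPerceptron class and the assumed X_nonlinear / y_nonlinear data from the sketches above, the training and evaluation cells would look roughly like this; the notebook's exact hyperparameters and feature handling may differ.

# Linearly separable case: standardize, train, score, and plot
X_std = (X_linear - X_linear.mean(axis=0)) / X_linear.std(axis=0)
model_lin = SingleLayerPerceptron(lr=0.1, epochs=100).fit(X_std, y_linear)
print('linear accuracy:', (model_lin.predict(X_std) == y_linear).mean())
plot_decision_boundary(model_lin, X_std, y_linear)

# Non-linearly separable case: add squared features so the model matches the grid's quadratic map
X_nl = np.c_[X_nonlinear, X_nonlinear[:, 0]**2, X_nonlinear[:, 1]**2]
model_nl = SingleLayerPerceptron(lr=0.1, epochs=100).fit(X_nl, y_nonlinear)
print('non-linear accuracy:', (model_nl.predict(X_nl) == y_nonlinear).mean())
plot_decision_boundary(model_nl, X_nonlinear, y_nonlinear, nonlinear=True)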
Visualisation
Decision-boundary plots for the linearly separable and non-linearly separable datasets.