PY_2_Define

Uploaded by

Gangesh Sawarkar

PY 2

✅ (a) Define an Artificial Neural Network (ANN)

Definition:
An Artificial Neural Network (ANN) is a computational model that mimics the
structure and behavior of the human brain. It is composed of interconnected units
called neurons, organized into layers. These networks can learn from data,
identify patterns, and make decisions.

Key Components of ANN:


1. Neurons (Nodes): Basic processing units that take inputs and produce an
output.

2. Weights: Each connection has a weight that adjusts as learning proceeds.

3. Bias: A constant value that shifts the activation function.

4. Activation Function: Introduces non-linearity (e.g., sigmoid, ReLU).

5. Layers:

o Input Layer: Takes raw input.
o Hidden Layer(s): Processes information.
o Output Layer: Provides final output.

Working:
Each neuron receives inputs, applies weights and bias, sums the result, applies an
activation function, and passes the output to the next layer.
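This forward pass can be sketched in a few lines of Python (a minimal sketch of a single neuron; the input, weight, and bias values are invented purely for illustration):

```python
import math

def sigmoid(z):
    """Sigmoid activation: squashes any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(inputs, weights, bias):
    """One neuron: weighted sum of inputs plus bias, then activation."""
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(net)

# Hypothetical numbers chosen only for illustration.
out = neuron_output(inputs=[0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(out, 4))
```

In a full network, this output would become one of the inputs to each neuron in the next layer.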

Example Use Cases:


 Image recognition
 Spam detection
 Language translation
 Medical diagnosis
✅ (b) Differentiate between Supervised and Unsupervised
Learning
| Aspect | Supervised Learning | Unsupervised Learning |
|---|---|---|
| Definition | The model is trained on labeled data. | The model is trained on unlabeled data. |
| Input | Input + corresponding output (labels). | Only input data, without labels. |
| Goal | Predict outcomes or classify data. | Discover hidden patterns or structure. |
| Examples | Classification, Regression | Clustering, Dimensionality Reduction |
| Algorithms | Decision Trees, SVM, ANN, KNN | K-means, PCA, Self-Organizing Maps |
| Use Case | Email Spam Detection, Stock Price Prediction | Customer Segmentation, Market Basket Analysis |

Example:
 Supervised Learning Example: Input: Image of a cat → Label: “Cat”. The
model learns from input–output pairs to predict future labels.
 Unsupervised Learning Example: Input: A dataset of customer purchases
(no labels). The model clusters similar customer behaviors without predefined
categories.
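The contrast can be shown in a short Python sketch: the supervised learner uses the labels to fit a decision rule, while the unsupervised learner only groups the raw inputs. The data points, class names, and the toy threshold/clustering logic below are all invented for illustration:

```python
# Supervised: learn a decision threshold from labeled 1-D points.
labeled = [(1.0, "cat"), (1.2, "cat"), (3.8, "dog"), (4.1, "dog")]

def fit_threshold(data):
    """Midpoint between the two class means -- a toy supervised classifier."""
    cats = [x for x, y in data if y == "cat"]
    dogs = [x for x, y in data if y == "dog"]
    return (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

threshold = fit_threshold(labeled)
predict = lambda x: "cat" if x < threshold else "dog"
print(predict(1.5), predict(3.5))

# Unsupervised: the same points WITHOUT labels, grouped by one
# k-means-style assignment step around two guessed centers.
unlabeled = [1.0, 1.2, 3.8, 4.1]
c0, c1 = 0.0, 5.0  # initial cluster centers (arbitrary guesses)
clusters = [0 if abs(x - c0) < abs(x - c1) else 1 for x in unlabeled]
print(clusters)
```

Note that the unsupervised branch recovers the same two groups, but it can only say "these points belong together", never "these are cats".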
✅ (c) Discuss the Theory, Architecture, and Training
Algorithm of Kohonen Neural Network
🔹 Theory:
The Kohonen Neural Network is a type of Self-Organizing Map (SOM),
developed by Teuvo Kohonen. It is an unsupervised learning model used for
clustering and visualization of high-dimensional data by mapping it to a lower-
dimensional space (usually 2D).

🔹 Key Idea:
It finds patterns in data and organizes similar input patterns into nearby locations
on the map (a grid of neurons).

🔹 Architecture:
1. Input Layer:

o Has n nodes (equal to the number of input features).
o Connected to all nodes in the output layer.

2. Output Layer (Map Grid):

o Usually a 2D grid (e.g., 5×5 or 10×10).
o Each node is called a neuron.
o Each neuron has a weight vector of the same dimension as the input.
🔹 Working / Training Algorithm:


1. Initialization:

o Randomly initialize the weights of the neurons in the output layer.

2. Input Vector:

o Present a training sample (input vector) to the network.

3. Distance Calculation:

o Calculate the distance (usually Euclidean) between the input vector and
every neuron’s weight vector.

4. Best Matching Unit (BMU):

o Select the neuron whose weights are closest to the input → called the
BMU.

5. Weight Update:

o Update the weights of the BMU and its neighboring neurons to move them
closer to the input.
o Update rule:

w(t+1) = w(t) + η(t) · (x(t) − w(t))

where:

 w(t) = current weight vector
 x(t) = input vector
 η(t) = learning rate at time t

6. Neighborhood Function:

o Neurons closer to the BMU are updated more than those farther away.

7. Repeat:

o Repeat for all input vectors over several iterations (epochs).
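The training loop above can be sketched in plain Python. This is a deliberately minimal illustration: a small 1-D grid of four neurons, a fixed learning rate, and a fixed neighborhood radius are simplifying assumptions (real SOMs usually decay both the learning rate and the radius over time, and use a 2-D grid):

```python
import math, random

random.seed(0)  # fixed seed so the sketch is reproducible

def euclidean(a, b):
    """Step 3: Euclidean distance between two vectors."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def train_som(data, grid_size=4, dim=2, epochs=50, lr=0.3, radius=1):
    # Step 1: randomly initialise one weight vector per grid neuron.
    weights = [[random.random() for _ in range(dim)] for _ in range(grid_size)]
    for _ in range(epochs):                 # Step 7: repeat over epochs
        for x in data:                      # Step 2: present each input
            # Steps 3-4: distances to all neurons, pick the BMU.
            dists = [euclidean(x, w) for w in weights]
            bmu = dists.index(min(dists))
            # Steps 5-6: move the BMU and its grid neighbours toward x.
            for i, w in enumerate(weights):
                if abs(i - bmu) <= radius:  # simple neighbourhood on a 1-D grid
                    weights[i] = [wi + lr * (xi - wi) for wi, xi in zip(w, x)]
    return weights

# Two well-separated clusters; the map should assign them to different neurons.
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.9], [0.95, 0.85]]
weights = train_som(data)
bmu = lambda x: min(range(len(weights)), key=lambda i: euclidean(x, weights[i]))
print(bmu([0.1, 0.1]), bmu([0.9, 0.9]))
```

After training, similar inputs land on the same (or adjacent) neurons, which is exactly the "nearby locations on the map" behavior described above.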

🔹 Applications of Kohonen Network:


 Data visualization
 Customer segmentation
 Speech recognition
 Intrusion detection systems
 Bioinformatics (e.g., gene expression clustering)

🔹 Example:
Suppose you have a dataset of customers with features like age, income, and
purchase behavior. A Kohonen SOM can group similar customers on a 2D map,
visually showing clusters of similar profiles.

Let's solve this perceptron learning problem step by step.


📌 Given:
| X₁ | X₂ | Y (Target Output) |
|----|----|-------------------|
| 1  | 1  | 1  |
| 1  | -1 | -1 |
| -1 | 1  | -1 |
| -1 | -1 | -1 |

📌 Initial conditions:
 Initial weights: w₁ = 0, w₂ = 0, θ = 0
 Learning rate: η = 1

We’ll update weights using the Perceptron Learning Rule:

wᵢ(new) = wᵢ(old) + η · (T − O) · xᵢ
θ(new) = θ(old) + η · (T − O) · (−1)

Where:

 T = target output
 O = actual output, given by the activation function:

O = +1 if net ≥ 0
    −1 if net < 0

net = w₁x₁ + w₂x₂ − θ
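These net-input and activation rules translate directly into a few lines of Python (a minimal sketch; the function names are mine):

```python
def net_input(w1, w2, theta, x1, x2):
    """net = w1*x1 + w2*x2 - theta"""
    return w1 * x1 + w2 * x2 - theta

def output(net):
    """Threshold activation: +1 if net >= 0, else -1."""
    return 1 if net >= 0 else -1

# First pattern with the initial weights w1 = w2 = 0, theta = 0:
print(output(net_input(0, 0, 0, 1, 1)))  # net = 0, so output is +1
```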

🔁 Epoch 1:
Pattern 1: (1, 1) → Target = 1
 Net = 0·1 + 0·1 − 0 = 0 → Output = +1 ✅ (No change since output matches
target)
Pattern 2: (1, −1) → Target = −1
 Net = 0·1 + 0·(−1) − 0 = 0 → Output = +1 ❌
 Error = −1 − (+1) = −2
 Update: w₁ = 0 + 1·(−2)·1 = −2;  w₂ = 0 + 1·(−2)·(−1) = 2;  θ = 0 + 1·(−2)·(−1) = 2
Pattern 3: (−1, 1) → Target = −1
 Net = −2·(−1) + 2·1 − 2 = 2 + 2 − 2 = 2 → Output = +1 ❌
 Error = −1 − (+1) = −2
 Update: w₁ = −2 + 1·(−2)·(−1) = 0;  w₂ = 2 + 1·(−2)·1 = 0;  θ = 2 + 1·(−2)·(−1) = 4
Pattern 4: (−1, −1) → Target = −1
 Net = 0·(−1) + 0·(−1) − 4 = −4 → Output = −1 ✅ (No change)

🔁 Epoch 2:
Pattern 1: (1, 1) → Target = 1
 Net = 0 + 0 − 4 = −4 → Output = −1 ❌
 Error = 1 − (−1) = 2
 Update: w₁ = 0 + 1·2·1 = 2;  w₂ = 0 + 1·2·1 = 2;  θ = 4 + 1·2·(−1) = 2
Pattern 2: (1, −1) → Target = −1
 Net = 2·1 + 2·(−1) − 2 = 2 − 2 − 2 = −2 → Output = −1 ✅ (No change)
Pattern 3: (−1, 1) → Target = −1
 Net = −2 + 2 − 2 = −2 → Output = −1 ✅ (No change)
Pattern 4: (−1, −1) → Target = −1
 Net = −2 + (−2) − 2 = −6 → Output = −1 ✅ (No change)

A third pass confirms Pattern 1 as well: Net = 2·1 + 2·1 − 2 = 2 → Output = +1 ✅.

✅ All four outputs match the target values → Converged!

✅ Final Weights:
 w 1=2
 w 2=2
 θ=2

🧠 Final Decision Function:

O = +1 if 2x₁ + 2x₂ ≥ 2
    −1 otherwise
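The whole hand calculation can be checked with a short Python script (a direct transcription of the update rule above; the variable names and early-stopping check are mine):

```python
def train_perceptron(patterns, eta=1, epochs=10):
    """Perceptron learning with the bias treated as a threshold theta."""
    w1, w2, theta = 0, 0, 0
    for _ in range(epochs):
        changed = False
        for x1, x2, target in patterns:
            net = w1 * x1 + w2 * x2 - theta
            out = 1 if net >= 0 else -1
            if out != target:                # update only on misclassification
                w1 += eta * (target - out) * x1
                w2 += eta * (target - out) * x2
                theta += eta * (target - out) * (-1)
                changed = True
        if not changed:                      # converged: one full clean pass
            break
    return w1, w2, theta

# The truth table from the problem statement.
patterns = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, -1)]
print(train_perceptron(patterns))  # matches the hand-computed (2, 2, 2)
```

Running it reproduces the final weights w₁ = 2, w₂ = 2, θ = 2 after the same two weight-changing epochs as the worked solution.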
