Module-IV
Prof. Ravindra Patil
Dept. of CSE, KLS V.D.I.T., Haliyal
• Activation functions:
• Decide whether a neuron should "fire" (be activated).
• Map input signals to output signals.
• Normalize outputs, typically to the range:
– 0 to 1, or
– -1 to +1.
• Can be linear or non-linear.
• Types of Activation Functions
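The slide does not enumerate the types here, but a minimal sketch of the most common activation functions (sigmoid, tanh, ReLU, and the linear/identity function) illustrates the 0-to-1 and -1-to-+1 output ranges mentioned above. The sample inputs are illustrative:

```python
import numpy as np

def sigmoid(x):
    """Non-linear; squashes any input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    """Non-linear; squashes any input into the range (-1, +1)."""
    return np.tanh(x)

def relu(x):
    """Non-linear; passes positive inputs through, zeroes out negatives."""
    return np.maximum(0.0, x)

def linear(x):
    """Linear (identity); output is unbounded."""
    return x

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x))  # all values in (0, 1)
print(tanh(x))     # all values in (-1, +1)
print(relu(x))     # [0.  0.  0.  0.5 2. ]
```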
• Perceptron and Learning Theory:
• Invented by: Frank Rosenblatt in 1958.
• Type: Linear binary classifier used in supervised learning.
• Based on:
– McCulloch-Pitts Neuron Model (basic neuron model)
– Hebbian Learning Rule (learning via weight adjustment)
• Key Innovations by Rosenblatt:
– Introduced variable weights
– Added an extra bias input
– Proposed that neurons could learn weights and thresholds from data using a supervised learning algorithm (sketched below)
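A minimal sketch of the perceptron learning rule, assuming a step-function activation and binary (0/1) targets; the AND-gate training set is purely illustrative:

```python
import numpy as np

def step(z):
    # Threshold activation: the neuron "fires" (1) if the weighted sum exceeds 0.
    return 1 if z > 0 else 0

# Illustrative training set: the AND function (inputs -> binary label).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # variable weights, learned from data
b = 0.0           # extra bias input
lr = 0.1          # learning rate

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = step(np.dot(w, xi) + b)
        error = target - pred
        # Perceptron rule: adjust weights in proportion to the error.
        w += lr * error * xi
        b += lr * error

print(w, b)
print([step(np.dot(w, xi) + b) for xi in X])  # expected: [0, 0, 0, 1]
```

Because the rule only shifts a linear decision boundary, it converges here (AND is linearly separable) but cannot learn non-separable functions such as XOR.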
• TYPES OF ARTIFICIAL NEURAL NETWORKS
• ANNs consist of multiple neurons arranged in layers.
• Different types of ANNs differ in their network structure, the activation functions involved, and the learning rules used.
• An ANN has three kinds of layers: the input layer, hidden layers, and the output layer.
• Any general ANN consists of one input layer, one output layer, and zero or more hidden layers.
• Feed Forward Neural Network (FFNN)
• Flow: One-way (forward).
• Complexity: Simple; no backpropagation.
• Use Cases: Simple classification, image processing (forward pass sketched below).
• Variants:
– Single-layered
– Multi-layered (still no feedback)
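A minimal sketch of the one-way (forward) flow, assuming one hidden layer with sigmoid activation; the layer sizes and random weights are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Layer sizes (illustrative): 3 inputs -> 4 hidden neurons -> 2 outputs.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def forward(x):
    # Signals flow one way only: input -> hidden -> output.
    # No feedback connections, no backward pass.
    h = sigmoid(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)

print(forward(np.array([0.5, -1.0, 2.0])))
```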
• Multi-Layer Perceptron (MLP)
• Structure: Input → Hidden layer(s) → Output.
• Connections: Fully connected layers.
• Learning: Supports backpropagation (sketched below).
• Use Cases: Deep learning tasks like:
– Speech recognition
– Medical diagnosis
– Forecasting
• Complexity: Higher than FFNN; slower but more capable.
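A minimal sketch of backpropagation in a one-hidden-layer MLP with sigmoid units and squared-error loss. The XOR data is illustrative, chosen because a single-layer perceptron cannot learn it; convergence may vary with the random initialization:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fully connected layers (illustrative sizes): 2 inputs -> 3 hidden -> 1 output.
W1, b1 = rng.normal(size=(2, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)
lr = 0.5

# Illustrative data: XOR (not linearly separable).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for epoch in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error back to the hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```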
• Feedback Neural Network
• Structure: Has feedback connections.
• Flow: Bi-directional (includes loops).
• Dynamic: More adaptive during training.
• Use Cases: Tasks requiring memory or dynamic context (sketched below).
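A minimal sketch of the feedback idea, assuming a recurrent-style hidden state that loops back into the computation at each time step; the sizes, random weights, and input sequence are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes: 1 input feature, 4 hidden (state) units.
W_in = rng.normal(size=(1, 4))
W_fb = rng.normal(size=(4, 4))  # feedback connections: state -> state
b = np.zeros(4)

def run(sequence):
    h = np.zeros(4)  # the "memory" carried between time steps
    for x in sequence:
        # The previous state h loops back into the computation,
        # giving the network dynamic context over the sequence.
        h = np.tanh(np.array([x]) @ W_in + h @ W_fb + b)
    return h

print(run([0.1, 0.5, -0.3]))
```

The final state depends on the whole input sequence, not just the last element, which is what makes feedback networks suitable for tasks requiring memory.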