ML_Unit-4
Unit 4: Neural Networks
• An Artificial Neural Network (ANN) is a learning mechanism that models the human brain to solve non-linear, complex problems.
• A neuron is connected to other neurons through about 10,000 synapses.
• A neuron receives input from other neurons; the inputs are combined.
• Once the combined input exceeds a critical level, the neuron discharges a spike - an electrical pulse that travels from the body, down the axon, to the next neuron(s).
• The axon endings almost touch the dendrites or cell body of the next neuron.
• Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters.
• Neurotransmitters are chemicals which are released from the first neuron and which bind to the second.
[Figure: Artificial neuron model - inputs x1 ... xm with weights w1 ... wm, a summation (processing) unit ∑, a transfer function f(vk) (activation function), and output y]
Perceptron
• The first neural network, the perceptron, was designed by Frank Rosenblatt in 1958.
• The perceptron is a linear binary classifier used for supervised learning.
• The McCulloch-Pitts model of an artificial neuron and the Hebbian learning rule of adjusting weights are combined here.
• Variable weight values and a bias are also introduced in this model.
• The perceptron model consists of four steps:
1. Inputs from other neurons
2. Weights and bias
3. Net sum
4. Activation function
Perceptron
• The modified neuron model receives a set of inputs x1, x2, x3, ..., xn and their associated weights w1, w2, w3, ..., wn and a bias.
• The summation function Net-sum computes the weighted sum of the inputs received by the neuron and also adds the bias value.
• The output is calculated as f(x) = Activation function(Net-sum + bias).
• The activation function is a binary step function which outputs a value of 1 if f(x) is above the threshold value and 0 if f(x) is below the threshold value.
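A minimal sketch of this forward pass in Python (NumPy); the example inputs, weights, bias and threshold below are illustrative assumptions, not values from the slides:

```python
import numpy as np

def perceptron_output(x, w, bias, threshold=0.0):
    """Perceptron forward pass: weighted sum of inputs plus bias, then binary step."""
    net_sum = np.dot(w, x) + bias           # Net-sum + bias
    return 1 if net_sum > threshold else 0  # binary step activation

# Illustrative example: two inputs with hand-picked weights and bias
x = np.array([1.0, 0.5])
w = np.array([0.4, -0.2])
print(perceptron_output(x, w, bias=0.1))    # prints 1, since 0.4 - 0.1 + 0.1 = 0.4 > 0
```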
Delta learning rule and gradient descent
• Generally, learning in neural networks is performed by adjusting the network weights in order to minimize the difference between the desired and estimated outputs.
• This delta difference is measured as an error function, also called a cost function.
• The cost function is linear and continuous and also differentiable.
• This way of learning is called the delta rule. It is a type of back propagation applied for training the network.
• The training error of a hypothesis is half the squared difference between the desired target output and the actual output, summed over the training set:
Training error E = 1/2 * Σ_{d ∈ T} (O_desired,d − O_estimated,d)²
where T is the training dataset and d a training instance.
• The principle of gradient descent is an optimization approach used to minimize the cost function by converging to a local minimum, moving in the negative direction of the gradient; each step during the movement is determined by the learning rate and the slope of the gradient.
• Gradient descent is the foundation of the back propagation algorithm used in MLP.
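A small sketch of the delta rule trained by batch gradient descent on the squared-error cost above, for a single linear unit; the learning rate, epoch count and toy data are illustrative assumptions:

```python
import numpy as np

def delta_rule_train(X, y, lr=0.1, epochs=200):
    """Train one linear unit by gradient descent on E = 1/2 * sum((y - o)^2)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        o = X @ w + b              # estimated outputs
        error = y - o              # desired minus estimated
        w += lr * X.T @ error      # step in the negative gradient direction
        b += lr * error.sum()
    return w, b

# Illustrative data: targets follow y = 2*x1 - x2 exactly
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = np.array([2.0, -1.0, 1.0, 3.0])
w, b = delta_rule_train(X, y)
print(w, b)   # weights approach [2, -1], bias near 0
```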
Radial Basis Function Neural Network
• The RBF network was introduced by Broomhead and Lowe in 1988.
• It is a type of MLP with one input layer, one output layer and strictly one hidden layer.
• The hidden layer uses a non-linear activation function called a radial basis function.
• This function converts the input parameters into a high-dimensional space and supports the network for linear separation.
• Radial basis functions are the Gaussian RBF and the multiquadratic RBF.
• RBFNN networks are useful for function approximation, interpolation, time series prediction, classification and system control.
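A brief sketch of the two radial basis functions named above; the width parameter sigma and the sample points are illustrative assumptions:

```python
import numpy as np

def gaussian_rbf(x, center, sigma=1.0):
    """Gaussian RBF: response decays with distance from the center."""
    r = np.linalg.norm(x - center)
    return np.exp(-(r ** 2) / (2 * sigma ** 2))

def multiquadratic_rbf(x, center, sigma=1.0):
    """Multiquadratic RBF: response grows with distance from the center."""
    r = np.linalg.norm(x - center)
    return np.sqrt(r ** 2 + sigma ** 2)

# A hidden unit responds most strongly (Gaussian) to inputs near its center
center = np.array([0.0, 0.0])
print(gaussian_rbf(np.array([0.1, 0.2]), center))        # close to 1
print(multiquadratic_rbf(np.array([0.1, 0.2]), center))  # close to sigma
```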
SOFM operation
• The output unit which is closest to the input sample by similarity is chosen as the winning unit, and its connection weights are adjusted by a learning factor.
• Thus the best-matching output unit, whose weights are adjusted, is moved close to the input sample, and a topographical map is formed.
• This process is repeated until the map does not change.
• During the test phase, the test samples are just classified.
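A minimal sketch of the winner-take-all weight update described above; the map size, learning factor and random input samples are illustrative assumptions, and shrinking the neighbourhood around the winner is omitted for brevity:

```python
import numpy as np

def sofm_step(weights, x, lr=0.5):
    """One SOFM update: pick the winning unit closest to x and move its weights toward x."""
    distances = np.linalg.norm(weights - x, axis=1)  # similarity of each output unit to the input
    winner = np.argmin(distances)                    # best-matching (winning) unit
    weights[winner] += lr * (x - weights[winner])    # adjust the winner's weights by the learning factor
    return winner

# Illustrative map of 4 output units over 2-dimensional inputs
rng = np.random.default_rng(0)
weights = rng.random((4, 2))
for x in rng.random((100, 2)):                       # repeat until the map stops changing
    sofm_step(weights, x)
print(weights)
```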
Applications of Neural Networks
In Healthcare:
● Disease Recognition.
● CAS (Computer Aided Surgery).
Neural network algorithms can improve the performance of dermatologists, cardiologists, ophthalmologists, and even psychotherapists by tracking the development of depression.

In Defence:
● UAV (Unmanned Aerial Vehicle).
● Automated Target Recognition.
● Autonomous soldier robots.
Neural networks are used in logistics, armed attack analysis, and for object location. They are also used in air patrols, maritime patrol, and for controlling automated drones.
Neural networks in business may be used to assist marketers in making predictions about a campaign's results by recognizing patterns from past marketing efforts. An example of this is the personalization of product recommendations on eCommerce sites like Amazon.

Self-driving cars use cameras, radar and lidar sensors to gather data about the environment around the vehicle. This data is then processed by convolutional neural networks (CNNs) to identify objects such as other vehicles, pedestrians, road signs and traffic lights.