
04-11-2023

Unit 4: Neural Networks
Dr. M. Thamarai, Professor, ECE, SVEC

Artificial Neural Networks

• Artificial neural networks (ANNs) imitate the behavior of the human brain and the way in which learning happens in humans.
• An ANN is a learning mechanism that models the human brain to solve non-linear, complex problems.
• The human brain constitutes a mass of neurons that are all connected as a network, which is in effect a directed graph.
• These neurons are processing units which receive information, process it, and then transmit this data to other neurons, allowing humans to learn almost any task.
Artificial Neural Networks

• Each neuron is modeled as a computing unit.
• An ANN system consists of many such computing units operating in parallel that can learn from observations.
• ANN applications in the field of computer science include natural language processing, pattern recognition, face recognition, speech recognition, text processing, stock prediction, etc.

Human nerve system

• The human nerve system has billions of neurons, processing units which enable humans to think, speak, hear and smell.
• The human nerve system makes us remember, recognize and correlate things around us.
• It is a learning system that consists of functional units called nerve cells, typically called neurons.
• The human neuron system is divided into two sections, called the Central Nerve System (CNS) and the Peripheral Nerve System (PNS).
• The brain and spinal cord constitute the CNS.
• The neurons inside and outside the CNS constitute the PNS.


Human nerve system

• Neurons are basically classified into three types: sensory neurons, motor neurons and interneurons.
• Sensory neurons get information from different parts of the body and bring it to the CNS.
• Motor neurons receive information from other neurons and transmit commands to the body parts.
• The CNS consists of only interneurons, which connect one neuron to another by receiving information from one neuron and transmitting it to another.

Biological neuron

• A typical biological neuron has four parts: dendrites, soma, axon and synapse. The dendrites accept information and the soma processes it.
• The axon computes the output, and the synapse is the link between two neurons.
• A single neuron is connected by axons to around 10,000 neurons, and through these axons the processed information is passed from one neuron to another.

Biological neuron

• A neuron fires if the input information crosses the threshold value, and it transmits signals to another neuron through the synapse.
• A synapse fires with electrical impulses called spikes, which are transmitted to another neuron.
• A single neuron can receive synaptic inputs from one neuron or from multiple neurons.
• These neurons form a network structure which processes input information and gives the output response.

How do our brains work?

A processing element:
• Dendrites: Input
• Cell body: Processor
• Synapse: Link
• Axon: Output


How do our brains work?

• A neuron is connected to other neurons through about 10,000 synapses.
• A neuron receives input from other neurons; the inputs are combined.
• Once the combined input exceeds a critical level, the neuron discharges a spike: an electrical pulse that travels from the cell body, down the axon, to the next neuron(s).
• The axon endings almost touch the dendrites or cell body of the next neuron.


• Transmission of an electrical signal from one neuron to the next is effected by neurotransmitters, chemicals which are released from the first neuron and which bind to the second.
• This link is called a synapse. The strength of the signal that reaches the next neuron depends on factors such as the amount of neurotransmitter available.

How do ANNs work?

An artificial neuron is an imitation of a human neuron.


Artificial Neuron: Structure of a Single Neuron

• Artificial neurons are modeled on biological neurons and are called nodes.
• A node, or neuron, can receive one or more pieces of input information and process them.
• Artificial neurons (nodes) are connected to other neurons by connection links.
• Each connection link is associated with a synaptic weight.

Simple model of a neuron

• The first mathematical model of a biological neuron was designed by McCulloch & Pitts in 1943.
• It involves two steps: (1) the neuron receives weighted inputs from other neurons; (2) it applies a threshold function, or activation function, to them.
• The neuron receives a set of inputs x1, x2, x3, ..., xn and their associated weights w1, w2, w3, ..., wn.
• The summation function computes the weighted sum of the inputs received by the neuron, and the sum is given to the activation function.
• If the sum exceeds the threshold value, the neuron fires.
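The two steps above can be sketched directly in Python. This is a minimal illustration of the McCulloch & Pitts model, not a standard library API; the function name and threshold value are illustrative.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Step 1: weighted sum of inputs. Step 2: fire (output 1) only if
    the sum exceeds the threshold, otherwise output 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Example: two inputs with unit weights and threshold 1.5 behave like
# a logical AND gate — the neuron fires only when both inputs are 1.
print(mcculloch_pitts([1, 1], [1, 1], 1.5))  # 1 (fires)
print(mcculloch_pitts([1, 0], [1, 1], 1.5))  # 0 (does not fire)
```

Choosing the weights and threshold determines which logical function the unit computes, which is why a single such neuron can only realize linearly separable functions.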


How do ANNs work?

• In the simplest model, the inputs x1, x2, ..., xm are summed directly:

  y = x1 + x2 + ... + xm

• Not all inputs are equal: each input is first multiplied by its weight, and the processing unit computes the weighted sum:

  y = x1w1 + x2w2 + ... + xmwm

• The signal is not passed down to the next neuron verbatim: the weighted sum vk is passed through a transfer function f(vk), the activation function.
• The output is a function of the input that is affected by the weights and the transfer function.


Three types of layers: Input, Hidden, and Output

Artificial Neural Networks

An ANN can:
1. compute any computable function, by the appropriate selection of the network topology and weight values;
2. learn from experience, specifically by trial and error.

Perceptron

• The first neural network, the perceptron, was designed by Frank Rosenblatt in 1958.
• The perceptron is a linear binary classifier used for supervised learning.
• The McCulloch-Pitts model of an artificial neuron and the Hebbian learning rule for adjusting weights are combined here.
• Variable weight values and a bias are also introduced in this model.
• The perceptron model consists of four steps:
• 1. Inputs from other neurons
• 2. Weights and bias
• 3. Net sum
• 4. Activation function

7
04-11-2023

Perceptron

• The modified neuron model receives a set of inputs x1, x2, x3, ..., xn, their associated weights w1, w2, w3, ..., wn, and a bias.
• The summation function (Net-sum) computes the weighted sum of the inputs received by the neuron and also adds the bias value.
• The output is calculated as f(x) = Activation function(Net-sum + bias).
• The activation function is a binary step function which outputs 1 if f(x) is above the threshold value θ and 0 if f(x) is below the threshold value.
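The four steps above can be sketched as a small Python perceptron. This is an illustrative sketch, not a reference implementation: the function names are made up, and integer inputs with a learning rate of 1 are used so the arithmetic stays exact.

```python
def step(net):
    """Binary step activation: 1 at or above the threshold (0 here), else 0."""
    return 1 if net >= 0 else 0

def predict(x, w, b):
    """Net sum of weighted inputs plus bias, passed through the step function."""
    return step(sum(xi * wi for xi, wi in zip(x, w)) + b)

def train_perceptron(samples, labels, lr=1, epochs=10):
    """Classic perceptron rule: adjust weights and bias only when the
    prediction disagrees with the label."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            error = y - predict(x, w, b)
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learn the (linearly separable) logical AND function.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
print([predict(x, w, b) for x in X])  # [0, 0, 0, 1]
```

Because the perceptron is a linear classifier, this training loop converges only when the classes are linearly separable, as AND is.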

Perceptron algorithm


Delta learning rule and gradient descent

• Generally, learning in neural networks is performed by adjusting the network weights in order to minimize the difference between the desired and estimated outputs.
• This delta difference is measured as an error function, also called a cost function. The cost function is continuous and differentiable.
• This way of learning is called the delta rule; it underlies the back propagation method applied for training the network.
• The training error of a hypothesis is half the sum of the squared differences between the desired target output and the actual output:

  E = (1/2) Σ_{d∈T} (O_desired,d − O_estimated,d)²

  where T is the training dataset and d is a training instance.
• The principle of gradient descent is an optimization approach used to minimize the cost function by converging to a local minimum, moving in the negative direction of the gradient; each step of the movement is determined by the learning rate and the slope of the gradient.
• Gradient descent is the foundation of the back propagation algorithm used in the MLP.

Types of Artificial Neural Networks

• An ANN consists of multiple neurons arranged in layers.
• The networks vary based on their structure, the activation function used, and the learning rule used.
• In an ANN there are generally three layers, called the input layer, the hidden layer and the output layer.
• Any ANN consists of one input layer, one output layer, and zero or more hidden layers.

Types of ANN

• Feed forward neural network
• Fully connected neural network
• Multi layer perceptron
• Feed back neural network


Feed forward neural network

• A simple neural network that consists of neurons arranged in layers, where information propagates only in the forward direction.
• This model may or may not contain a hidden layer, and there is no back propagation.
• Based on the number of hidden layers, the network is classified as a single-layered or multi-layered feed forward network.
• Simple to design and easy to maintain.
• These networks are fast but cannot be used for complex learning.
• Applications: simple classification and image processing applications.

Fully connected network and Multi layer Perceptron

• In a fully connected neural network, all neurons in a layer are connected to all neurons in the next layer.
• An MLP has one input layer, one output layer, and one or more hidden layers.
• Every neuron in a layer is connected to all neurons in the next layer, and thus these networks are fully connected networks.


Multi layer Perceptron

• The information flows in both directions.
• In the forward direction, the inputs are multiplied by the weights of the neurons and forwarded to the activation function of each neuron, and the output is passed to the next layer.
• If the output is incorrect, then in the backward direction the error is back propagated to adjust the weights and biases to get the correct output. Thus the network learns from the training data.
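One forward-then-backward pass of a small MLP might look like the sketch below: a 2-input, 2-hidden, 1-output network with sigmoid activations, trained on a single sample with the squared-error gradient. All sizes, initial weights and the learning rate are illustrative assumptions, not values from the slides.

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def train_step(x, target, w_hidden, w_out, lr=0.5):
    # Forward direction: weighted sums through hidden and output layers.
    h = [sigmoid(sum(xi * wi for xi, wi in zip(x, row))) for row in w_hidden]
    o = sigmoid(sum(hi * wi for hi, wi in zip(h, w_out)))
    # Backward direction: delta terms derived from the output error.
    delta_o = (target - o) * o * (1 - o)                      # output delta
    delta_h = [delta_o * w * hi * (1 - hi) for w, hi in zip(w_out, h)]
    # Adjust weights against the error gradient (uses the pre-update w_out).
    w_out = [wi + lr * delta_o * hi for wi, hi in zip(w_out, h)]
    w_hidden = [[wi + lr * dh * xi for wi, xi in zip(row, x)]
                for row, dh in zip(w_hidden, delta_h)]
    return w_hidden, w_out, (target - o) ** 2

# Repeating the step on one sample drives the squared error down.
wh, wo = [[0.1, 0.2], [-0.1, 0.3]], [0.2, -0.2]
errors = []
for _ in range(50):
    wh, wo, e = train_step([1.0, 0.5], 1.0, wh, wo)
    errors.append(e)
print(errors[0] > errors[-1])  # True: the error shrinks as weights adjust
```

Note that the hidden-layer deltas are computed with the old output weights before those weights are updated; doing it in the other order is a classic backpropagation bug.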

Multi layer Perceptron

• These networks are used in deep learning for complex classification, speech recognition, medical diagnosis, forecasting, etc.
• These networks are slow and complex.

Feed back neural network

• These networks have feedback connections between neurons that allow information to flow in both directions in the network.
• The output signals can be sent back to neurons in the same layer or to neurons in the preceding layers.
• These networks are more dynamic during training.


Feedback neural network

Radial Basis Function Neural Network

• The RBF network was introduced by Broomhead and Lowe in 1988.
• It is a type of MLP with one input layer, one output layer, and strictly one hidden layer.
• The hidden layer uses a non-linear activation function called a radial basis function.
• This function maps the input parameters into a high-dimensional space, which supports the network in achieving linear separation.
• Radial basis functions include the Gaussian RBF and the multiquadratic RBF.
• RBFNN networks are useful for function approximation, interpolation, time series prediction, classification and system control.


RBFNN Architecture

• 1. An input layer feeds the n-dimensional input vector (x1, x2, ..., xn) to the network.
• 2. A hidden layer comprises m nonlinear radial basis function neurons, where m >= n. The Gaussian RBF is used in the hidden layer, so the output of a hidden layer neuron for an input vector x is Hi(x) = exp(−‖x − ci‖² / (2r²)), where ci is the neuron's centre and r its radius.
• Each RBF in the hidden layer compares the input x with the centre of its neuron (a bell curve) and outputs a similarity value between 0 and 1.
• If the input x equals the neuron centre, the output is 1; as the difference increases, the activation value (the output of the neuron) falls off exponentially towards 0.

RBFNN Architecture

• 3. An output layer computes the linear weighted sum of the outputs of the hidden layer neurons.
• Hi(x) is the output of hidden layer neuron i for an input vector x.
• wi is the weight on the link from hidden layer neuron i to the output layer.
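The architecture above — Gaussian similarity in the hidden layer, linear weighted sum at the output — can be sketched as follows. The centres, radius and output weights are illustrative placeholders, since in practice they would be chosen or learned from data.

```python
import math

def gaussian_rbf(x, centre, r):
    """Similarity in (0, 1]: exactly 1 when x equals the centre, and
    falling off exponentially as the squared Euclidean distance grows."""
    sq_dist = sum((xi - ci) ** 2 for xi, ci in zip(x, centre))
    return math.exp(-sq_dist / (2 * r ** 2))

def rbfnn_output(x, centres, r, weights):
    """Output layer: linear weighted sum of the hidden-layer activations."""
    hidden = [gaussian_rbf(x, c, r) for c in centres]
    return sum(w * h for w, h in zip(weights, hidden))

centres = [(0.0, 0.0), (1.0, 1.0)]
weights = [0.5, 1.5]
print(gaussian_rbf((1.0, 1.0), (1.0, 1.0), 1.0))  # 1.0 at the centre
print(rbfnn_output((1.0, 1.0), centres, 1.0, weights))
```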


RBFNN Algorithm

• Input: input vector (x1, x2, x3, ..., xn)
• Output: Yn
• Assign random weights in the range [-1, 1] to every connection from the hidden layer to the output layer.
• Forward phase:
• Step 1. Calculate the input and output of each node in the input layer. The input layer uses a direct transfer function, so the output of a node equals its input: for any node i, Ii = Xi and Oi = Ii.
• Step 2. For each node j in the hidden layer, find the centre (receptor) cj and the variance r, where x is the input, cj is the centre and r is the radius. Compute (x − cj)², applying the Euclidean distance between x and cj.

RBFNN Algorithm

• Step 3. For each node k in the output layer, compute the linear weighted sum of the outputs of the hidden layer neurons j.
• Backward phase:
• 1. Train the hidden layer using back propagation.
• 2. Update the weights between the hidden and output layers.

Self Organizing Feature Map (SOFM)

• SOFM is a special type of feed forward neural network developed by Dr. Teuvo Kohonen in 1982.
• The Kohonen network is a competitive neural network, also called an adaptive learning network.
• SOFM is an unsupervised model.
• It clusters the data by mapping high-dimensional data onto a two-dimensional plane of neurons.


SOFM

• The model learns to cluster, or self-organize, high-dimensional data without knowing the membership of the input data.
• Hence it is named SOFM; these SOFM nodes are also called feature maps.
• The mapping is based on the relative distance or similarity between points: points that are near each other in the input space are mapped to nearby output map units in the SOFM.

Network architecture and operations

• The network architecture consists of only two layers, the input layer and the output layer; there is no hidden layer.
• The number of units in the input layer is based on the length of the input samples, which are vectors of length n.
• Each connection from the input units to the output units is assigned a random weight.
• There is one weight vector of length n associated with each output unit.

SOFM operation

• Output units have intra-layer connections; no weights are assigned to these connections, but they are used when updating weights.
• The SOFM network operates in two phases: the training phase and the mapping phase.
• During the training phase, the input layer is fed input samples chosen randomly from the training data.
• The units of the output layer are initially assigned some weights. As an input sample is fed in, each output unit computes a similarity score by Euclidean distance measurement, and the units compete with each other.
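One competitive training step can be sketched as below: every output unit measures its Euclidean distance to the sample, the nearest unit wins, and the winner's weight vector is pulled toward the sample by the learning factor. This is a simplified, illustrative sketch; a full SOFM would also update the winner's neighbours on the map, which is omitted here.

```python
def sq_distance(a, b):
    """Squared Euclidean distance between two equal-length vectors."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def sofm_step(sample, weight_vectors, lr=0.5):
    """Find the winning unit and move its weights toward the sample."""
    winner = min(range(len(weight_vectors)),
                 key=lambda i: sq_distance(sample, weight_vectors[i]))
    weight_vectors[winner] = [w + lr * (s - w)
                              for w, s in zip(weight_vectors[winner], sample)]
    return winner

# Two output units; the sample lies nearest unit 1, so unit 1 wins and
# its weight vector moves halfway toward the sample (lr = 0.5).
units = [[0.0, 0.0], [1.0, 1.0]]
win = sofm_step([0.9, 0.8], units)
print(win)  # 1
```

Repeating this step over many randomly drawn samples is what gradually forms the topographical map described below.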


SOFM operation

• The output unit closest to the input sample by similarity is chosen as the winning unit, and its connection weights are adjusted by the learning factor.
• Thus the weights of the best-matching output unit are adjusted and moved closer to the input sample, and a topographical map is formed.
• This process is repeated until the map no longer changes.
• During the test phase, the test samples are simply classified.

Applications of Neural Networks

What is a Neural Network?

A neural network is a method in artificial intelligence that teaches computers to process data in a way that is inspired by the human brain.


In Image Processing:

● Face Recognition
● Face Detection
● Noise Removal

A large number of pictures are fed into a database for training a neural network; the collected images are further processed for training.

In Speech Processing:

● Speech-to-Text converter
● Text-to-Speech converter
● Speech recognition
● Script converter

This includes the selection of words, the organization of relevant grammatical forms, and then the articulation of the resulting sounds by the motor system using the vocal apparatus.

In Healthcare:

● Disease Recognition
● CAS (Computer Aided Surgery)

Neural network algorithms can improve the performance of dermatologists, cardiologists, ophthalmologists, and even psychotherapists, for example by tracking the development of depression.

In Defence:

● UAV (Unmanned Aerial Vehicle)
● Automated Target Recognition
● Autonomous soldier robots

Neural networks are used in logistics, armed attack analysis, and object location. They are also used in air patrols, maritime patrols, and for controlling automated drones.


In Sales and Business:

● Stock Prediction
● Chatbots for queries
● Virtual customer support

Neural networks in business may be used to help marketers make predictions about a campaign's results by recognizing patterns from past marketing efforts. An example of this is the personalization of product recommendations on eCommerce sites like Amazon.

In Automobiles:

● Self-driving cars
● Speed Recognition
● Traffic recognition

Self-driving cars use cameras, radar and lidar sensors to gather data about the environment around the vehicle. This data is then processed by convolutional neural networks (CNNs) to identify objects such as other vehicles, pedestrians, road signs and traffic lights.
