
Biological Neuron Artificial Neuron

• Bio-ANN [https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm] [https://ujjwalkarn.me/2016/08/09/quick-intro-neural-networks/]
• Activation functions [https://en.wikipedia.org/wiki/Activation_function]
• Layer of Neurons
• Role of Bias [https://stackoverflow.com/questions/2480650/role-of-bias-in-neural-networks]
• McCulloch-Pitts Model (unsup) [https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/]
• Perceptron (sup)
• Learning: Supervised/Unsupervised/Reinforcement [https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_neural_networks.htm]
• Applications of Neural Network
• ANN learning methods
• Desirable properties of ANN: stability, plasticity
• Introduction to Back Propagation Networks
Biological NN Vs ANN
• Characteristic abilities of Biological
neural systems
– pattern recognition
– perception
– motor control
– Memorize
– Learn
– Generalize
• Components
– Neurons: the basic building blocks of
biological neural systems are
nerve cells, referred to as neurons
– Synapses: interconnections
between the axon of one neuron
and a dendrite of another neuron
• Algorithmic models of the features
of biological neural systems are
called “artificial neural networks
(ANN)”
Order of 10-500 billion neurons in the human cortex, with 60 trillion synapses.
Arranged in approximately 1000 main modules, each with 500 neural
networks.
Types of ANN (Topologies)

• FeedForward ANN
– The information flow is unidirectional.
– A unit sends information to another unit from which it does not receive any information.
– There are no feedback loops.
– Application: pattern generation/recognition/classification.
• Feedback ANN
– Feedback loops are allowed.
– Application: content addressable memories.
More Types of ANNs
• Single-layer NNs, such as the Hopfield network;
• Multilayer feedforward NNs, including, for example,
standard backpropagation, functional link and product
unit networks;
• Temporal NNs, such as the Elman and Jordan simple
recurrent networks as well as time-delay neural networks;
• Self-organizing NNs, such as the Kohonen self-
organizing feature maps and the learning vector
quantizer;
• Combined feedforward and self-organizing NNs, such as
the radial basis function networks.
Single Neuron

• Components
– X1, X2: numerical inputs, with associated weights w1 and w2
–  f  is non-linear and is called the Activation Function - it takes a single number and performs a certain fixed mathematical operation on it
– 1: bias input, with weight b
• Output of the neuron: f(w1·X1 + w2·X2 + b) (see the sketch below)
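A minimal sketch of this single-neuron computation; the weight values, bias and the choice of a sigmoid activation below are illustrative assumptions, not taken from the slides:

import numpy as np

def single_neuron(x, w, b, activation):
    # weighted sum of the numerical inputs plus the bias, passed through the activation function f
    net = np.dot(w, x) + b          # w1*X1 + w2*X2 + b
    return activation(net)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = single_neuron(x=np.array([0.5, -1.0]),   # X1, X2
                  w=np.array([0.8, 0.2]),    # illustrative weights
                  b=0.1,                     # bias weight
                  activation=sigmoid)
print(y)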
Activation Function
[Ref- Engelbrecht Andries P., Computational Intelligence: An Introduction, Wiley]

• Linear: produces a linearly modulated output, where β is a constant.
  f(net − θ) = β(net − θ)
• Step: takes a real-valued input and squashes it to the range [β1, β2], binary or bipolar.
  f(net − θ) = β1 if net ≥ θ
             = β2 if net < θ
• Ramp: a combination of the linear and step functions.
  f(net − θ) = β        if net − θ ≥ β
             = net − θ  if −β < net − θ < β
             = −β       if net − θ ≤ −β
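A small Python sketch of these three functions, following the reconstructed definitions above; the parameter defaults and the use of θ = 0 are illustrative, not taken from the slides:

import numpy as np

def linear(net, beta=1.0, theta=0.0):
    # f(net - theta) = beta * (net - theta): linearly modulated output
    return beta * (net - theta)

def step(net, beta1=1.0, beta2=0.0, theta=0.0):
    # beta1 when net >= theta, beta2 otherwise (binary here; use beta2 = -1 for bipolar)
    return beta1 if net >= theta else beta2

def ramp(net, beta=1.0, theta=0.0):
    # linear in the middle, clamped to [-beta, beta] at the extremes
    return float(np.clip(net - theta, -beta, beta))

for net in (-2.0, 0.3, 2.0):
    print(net, linear(net), step(net), ramp(net))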
Activation Function
[Ref- Engelbrecht Andries P., Computational Intelligence: An Introduction, Wiley]

• Sigmoid: takes a real-valued input and squashes it to the range (0, 1). β controls the steepness.
• tanh: takes a real-valued input and squashes it to the range [-1, 1]. β controls the steepness.
• ReLU (Rectified Linear Unit): takes a real-valued input and thresholds it at zero (replaces negative values with zero).
  f(x) = max(0, x)
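A matching sketch of these three functions; the β-parameterised sigmoid and tanh forms are the standard ones, and β defaulting to 1 is an assumption:

import numpy as np

def sigmoid(x, beta=1.0):
    # squashes a real-valued input to the range (0, 1); beta controls the steepness
    return 1.0 / (1.0 + np.exp(-beta * x))

def tanh(x, beta=1.0):
    # squashes a real-valued input to the range [-1, 1]; beta controls the steepness
    return np.tanh(beta * x)

def relu(x):
    # thresholds at zero: replaces negative values with zero
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(sigmoid(x))
print(tanh(x))
print(relu(x))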
Layers of Neurons: (e.g. in Feedforward NN)

• Input nodes
–No computation is performed in any of the Input nodes
–They just pass on the information to the hidden nodes
• Hidden nodes
–They perform computations and transfer information from the input
nodes to the output nodes.
–There can be zero or multiple hidden layers
• Output nodes
–Responsible for computations and transferring information from the
network to the outside world
–One output node for one decision parameter (see the sketch below)
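A minimal forward-pass sketch for the layered structure above; the single hidden layer, sigmoid activations and random illustrative weights are all assumptions, not specified in the slides:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, W_hidden, b_hidden, W_out, b_out):
    # input nodes only pass x on; hidden and output nodes do the computation
    h = sigmoid(W_hidden @ x + b_hidden)   # hidden layer
    return sigmoid(W_out @ h + b_out)      # output layer: one value per decision parameter

rng = np.random.default_rng(0)
x = np.array([0.2, 0.7, -0.1])                         # 3 input nodes
y = feedforward(x,
                rng.normal(size=(4, 3)), np.zeros(4),  # 4 hidden nodes
                rng.normal(size=(2, 4)), np.zeros(2))  # 2 output nodes
print(y)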
McCulloch-Pitts Neuron

• The first primitive model of a biological neuron was conceptualized by Warren
Sturgis McCulloch and Walter Harry Pitts in 1943.
• Elements
– Neuron: the computational unit in which the input signals are combined and an output is fired
• Summation Function: simply calculates the sum of the incoming (excitatory) inputs.
• Activation Function: essentially a step function that checks whether the summation
is greater than or equal to a preset Threshold value; if yes, the neuron fires
(i.e. output = 1), if not, the neuron does not fire (i.e. output = 0).
– Neuron fires: Output = 1, if Summation >= Threshold
– Neuron does not fire: Output = 0, if Summation < Threshold
– Excitatory Input: an incoming binary signal to the neuron, which can take only
two values, 0 (= OFF) or 1 (= ON)
– Inhibitory Input: if this input is on, it will not allow the neuron to fire, even if there are
other excitatory inputs which are on.
– Output: the value 0 indicates that the neuron does not fire, the value 1 indicates
that the neuron does fire.
Function of McCulloch-Pitts Model
• Design
– The McCulloch-Pitts neuron model can be used to compute some
simple functions which involve binary inputs and outputs.
• Steps (a minimal code sketch follows this list)
– The input signals are switched on and the neuron is activated.
– If the neuron detects that the inhibitory input is switched on, the
output is straightaway zero, which means the neuron does not fire.
– If there is no inhibitory input, the neuron proceeds to count the
number of excitatory inputs that are switched on.
– If this count is greater than or equal to the preset threshold value,
the neuron fires (output = 1); otherwise the neuron does not
fire (output = 0).
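A minimal Python sketch of these steps; the function name and argument layout are illustrative:

def mcculloch_pitts(excitatory, inhibitory, threshold):
    # McCulloch-Pitts neuron: binary inputs, fixed threshold, no learning
    if any(inhibitory):
        # an active inhibitory input prevents firing outright
        return 0
    # otherwise fire iff the count of active excitatory inputs reaches the threshold
    return 1 if sum(excitatory) >= threshold else 0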
Illustration of McCulloch-Pitts Model

https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
Design of McCulloch-Pitts Neuron
for AND Function
• For the neuron to fire, both excitatory input signals have
to be enabled.
• So it is very intuitive that the threshold value should be 2 .
• Additionally if the inhibitory input is on, then irrespective
of any other input, the neuron will not fire.

https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
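Using the mcculloch_pitts function sketched earlier, the AND design with threshold 2 could be exercised like this:

# uses mcculloch_pitts() from the earlier sketch
print(mcculloch_pitts([1, 1], inhibitory=[0], threshold=2))  # 1: both excitatory inputs enabled
print(mcculloch_pitts([1, 0], inhibitory=[0], threshold=2))  # 0: only one excitatory input enabled
print(mcculloch_pitts([1, 1], inhibitory=[1], threshold=2))  # 0: inhibitory input blocks firing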
Design of McCulloch-Pitts Neuron
for OR Function
• For the neuron to fire, at least one excitatory input signal
has to be enabled.
• So it is very intuitive that the threshold value should be 1 .
• Additionally if the inhibitory input is on, then irrespective
of any other input, the neuron will not fire.

https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
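Likewise, the OR design with threshold 1, again reusing the mcculloch_pitts sketch from earlier:

# uses mcculloch_pitts() from the earlier sketch
print(mcculloch_pitts([1, 0], inhibitory=[0], threshold=1))  # 1: one excitatory input is enough
print(mcculloch_pitts([0, 0], inhibitory=[0], threshold=1))  # 0: no excitatory input enabled
print(mcculloch_pitts([1, 1], inhibitory=[1], threshold=1))  # 0: inhibitory input blocks firing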
Design of McCulloch-Pitts Neuron
for Real-Life Decision Making
• Problem: You like going to a movie if it is a new release, but you
watch it only if the ticket price is also cheap.
Further, you cannot plan a movie on weekdays, as they are busy.
• Design
– Excitatory Inputs
• X1- IsMovieNew
• X2- IsTicketCheap
– Output Function: AND (since the output is 1 only if both X1 and X2 are 1)
– Inhibitory Input
• IsWeekday: if it is on, the neuron cannot fire, so there is no question of planning for the movie.

https://machinelearningknowledge.ai/mcculloch-pitts-neuron-model/
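A sketch of this movie decision using the same mcculloch_pitts helper; the wrapper function below is an illustrative name, not from the source:

# uses mcculloch_pitts() from the earlier sketch
def plan_movie(is_movie_new, is_ticket_cheap, is_weekday):
    # AND of the two excitatory inputs, vetoed by the inhibitory IsWeekday input
    return mcculloch_pitts([is_movie_new, is_ticket_cheap],
                           inhibitory=[is_weekday], threshold=2)

print(plan_movie(1, 1, 0))  # 1: new release, cheap ticket, not a weekday
print(plan_movie(1, 1, 1))  # 0: weekday, so the neuron cannot fire
print(plan_movie(1, 0, 0))  # 0: ticket is not cheap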
Limitation of McCulloch-Pitts Model

• There is no (machine) learning in this model.
• This model was not built to work as a
machine learning model in the first place.
• Rather, McCulloch and Pitts just wanted to
build a mathematical model to represent
the workings of a biological neuron.
• But this humble-looking model actually
inspired other researchers to come up with
true machine-learning-based neural models
in later years.
Learning Methods
• Supervised Learning
–The model is trained using examples of the expected output value for each
input combination.
–For example, pattern recognition: the ANN comes up with a guess for a
given input vector, compares the guess with the corresponding
“correct” output value, and adjusts its weights according to the error
(a small code sketch of this idea follows this list).
• Unsupervised Learning
–It is required when there is no example data set with known answers.
–For example, searching for a hidden pattern: clustering, i.e.
dividing a set of elements into groups according to some unknown
pattern, is carried out based on the existing data sets.
• Reinforcement Learning
–This strategy is built on observation.
–The ANN makes a decision by observing its environment.
If the observation is negative, the network adjusts its weights so that it
can make the required decision the next time.
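As a hedged illustration of the supervised idea above, here is a perceptron-style weight update on a small labelled data set; this particular update rule and the AND example are assumptions chosen for illustration, not prescribed by the slides:

import numpy as np

def train_supervised(X, targets, lr=0.1, epochs=20):
    # adjust weights from (input, correct output) examples, using the error of each guess
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, t in zip(X, targets):
            guess = 1 if np.dot(w, x) + b >= 0 else 0   # the network's guess
            error = t - guess                           # compare with the "correct" output
            w += lr * error * x                         # adjust weights according to the error
            b += lr * error
    return w, b

# illustrative labelled data set: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
w, b = train_supervised(X, np.array([0, 0, 0, 1]))
print(w, b)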
Applications of ANN
• Classification - where the aim is to predict the class of
an input vector;
• Pattern matching - where the aim is to produce a
pattern best associated with a given input vector;
• Pattern completion - where the aim is to complete the
missing parts of a given input vector;
• Optimization-where the aim is to find the optimal values
of parameters in an optimization problem;
• Control - where, given an input vector, an appropriate
action is suggested;
• Function approximation/time series modeling - where
the aim is to learn the functional relationships between
input and desired output vectors;
• Data mining - with the aim of discovering hidden
patterns from data – also referred to as knowledge
discovery.
