Unit1.2-OOMDUML

The document discusses the structure and function of biological neurons and their influence on artificial neural networks (ANNs). It explains how artificial neurons mimic biological neurons, including the roles of dendrites, soma, axon, and synapses in processing and transmitting signals. Additionally, it covers the McCulloch-Pitts neuron model, which serves as a foundational concept for understanding neural networks and their operations, including boolean functions and limitations.


Neural Networks

BIOLOGICAL NEURON

[Figure: simplified illustration of the human response to a stimulus]
Biological neuron
▪ The most fundamental unit of deep
neural networks is the artificial
neuron, or perceptron.
▪ The very first step towards the
perceptron we use today was taken in
1943 by McCulloch and Pitts, who
mimicked the functionality of a
biological neuron.

Biological neuron
▪ A tiny piece of the brain, about the size
of a grain of rice, contains over 10,000
neurons, each of which forms an
average of 6,000 connections with other
neurons.
▪ The neuron is optimized to receive
information from other neurons, process
this information in a unique way, and
send its result to other cells.

Biological neuron
▪ Dendrite: Receives signals from other neurons

▪ Soma: Processes the information

▪ Axon: Transmits the output of this neuron

▪ Synapse: Point of connection to other neurons

Biological neuron
▪ The neuron receives its inputs along antennae-like
structures called dendrites.
▪ Each of these incoming connections is dynamically
strengthened or weakened based on how often it is used.
▪ It’s the strength of each connection that determines
the contribution of the input to the neuron’s output.
▪ After being weighted by the strength of their respective
connections, the inputs are summed together in the cell
body.
▪ This sum is then transformed into a new signal that’s
propagated along the cell’s axon and sent off to other
neurons.
Artificial neural network
▪ The human brain is a highly complex structure that can be viewed as a massive,
highly interconnected network of simple processing elements called neurons.
▪ Artificial neural networks (ANNs), or simply neural networks (NNs), are
simplified models (i.e. imitations) of the biological nervous system, and have
therefore been motivated by the kind of computing performed by the human brain.
▪ The behavior of a biological neural network can be captured by a simple model
called an artificial neural network.
ANN
▪ We may note that a neuron is part of an interconnected nervous
system and serves the following functions:
▪ Computation of input signals
▪ Transportation of signals (at a very high speed)
▪ Storage of information
▪ Perception, automatic training and learning

▪ We can also see the analogy between the biological neuron and the artificial
neuron. Every component of the model (i.e. the artificial neuron) bears a direct
analogy to that of a biological neuron. It is this model that forms the basis of the
neural network (i.e. the artificial neural network).
ANN
▪ Note that, a biological neuron receives all inputs through the dendrites, sums
them and produces an output if the sum is greater than a threshold value.
▪ The input signals are passed on to the cell body through the synapse, which may
accelerate or retard an arriving signal.
▪ It is this acceleration or retardation of the input signals that is modeled by the
weights.
▪ An effective synapse, which transmits a stronger signal, will have a
correspondingly larger weight, while a weak synapse will have a smaller weight.
▪ Thus, weights here are multiplicative factors of the inputs to account for the
strength of the synapse.
ANN
▪ Hence, the total input, say I, received by the soma of the artificial neuron is

I = w_1 x_1 + w_2 x_2 + … + w_n x_n = Σ w_i x_i

▪ To generate the final output y, the sum is passed to a filter φ called the transfer
function, which releases the output.
▪ That is, y = φ(I)
ANN
▪ A very commonly used transfer function is the thresholding function.
▪ In this thresholding function, the sum (i.e. I) is compared with a threshold value θ.
▪ If the value of I is greater than θ, then the output is 1; otherwise it is 0 (this
works like a simple threshold filter).
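The weighted sum and thresholding transfer function described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides; the names `artificial_neuron`, `weights`, and `theta` are assumptions.

```python
# Sketch of an artificial neuron: weight the inputs, sum them in the
# "soma" (I), then apply a thresholding transfer function phi.
def artificial_neuron(inputs, weights, theta):
    # I = sum of weighted inputs
    I = sum(w * x for w, x in zip(weights, inputs))
    # Thresholding transfer function: 1 if I > theta, else 0
    return 1 if I > theta else 0

# A strong synapse (large weight) contributes more to the sum.
print(artificial_neuron([1, 1], [0.9, 0.2], theta=1.0))  # I = 1.1 > 1.0 -> 1
print(artificial_neuron([1, 1], [0.3, 0.2], theta=1.0))  # I = 0.5 <= 1.0 -> 0
```

Note how changing a single weight (the "strength of a synapse") can flip the output even when the inputs are identical.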
McCulloch-Pitts Neuron
▪ The first computational model of a neuron was proposed by Warren McCulloch
(neuroscientist) and Walter Pitts (logician) in 1943.

▪ It may be divided into two parts. The first part, g, takes the inputs and performs
an aggregation; based on the aggregated value, the second part, f, makes a
decision.
McCulloch-Pitts Neuron
▪ Let's suppose that I want to predict my own decision on
whether or not to watch a random football game on TV.
▪ The inputs are all boolean, i.e. {0, 1}, and my output
variable is also boolean {0: won't watch it, 1: will watch it}.
▪ So, x_1 could be isPremierLeagueOn (I like the Premier
League more)
▪ x_2 could be isItAFriendlyGame (I tend to care less about
friendlies)
▪ x_3 could be isNotHome (can't watch it when I'm running
errands, can I?)
▪ x_4 could be isManUnitedPlaying (I am a big Man United
fan. GGMU!) and so on.

McCulloch-Pitts Neuron
▪ These inputs can either be excitatory or
inhibitory.
▪ Inhibitory inputs are those that have maximum
effect on the decision, irrespective of the other
inputs: if x_3 is 1 (not home), then my output
will always be 0, i.e. the neuron will never
fire, so x_3 is an inhibitory input.
▪ Excitatory inputs cannot make the neuron fire on
their own, but they may make it fire when combined
together. Formally, this is what is going on:

y = 0 if any inhibitory input is 1
y = 1 if Σ x_i ≥ θ, and y = 0 otherwise

McCulloch-Pitts Neuron
▪ We can see that g(x) is just doing a sum of the
inputs: a simple aggregation.
▪ θ (theta) here is called the thresholding parameter.
▪ For example, if I always watch the game when the
sum turns out to be 2 or more, then θ is 2 here.
This is called thresholding logic.
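The inhibitory/excitatory behavior and the thresholding logic above can be sketched in Python. This is a minimal sketch, not code from the slides; the function name `mp_neuron` and the argument layout are assumptions.

```python
# M-P neuron: all inputs are boolean. Any active inhibitory input forces
# the output to 0; otherwise the neuron fires when the sum of the
# excitatory inputs reaches the threshold theta.
def mp_neuron(excitatory, inhibitory, theta):
    if any(inhibitory):              # inhibitory input overrides everything
        return 0
    g = sum(excitatory)              # g(x): simple aggregation
    return 1 if g >= theta else 0    # f: thresholding logic

# Football example: x_1 = isPremierLeagueOn, x_2 = isItAFriendlyGame,
# x_4 = isManUnitedPlaying are excitatory; x_3 = isNotHome is inhibitory.
print(mp_neuron([1, 0, 1], inhibitory=[0], theta=2))  # sum = 2 >= 2 -> fires (1)
print(mp_neuron([1, 0, 1], inhibitory=[1], theta=2))  # not home -> never fires (0)
```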

By: Prof. Priya Kaul 18


McCulloch-Pitts Neuron

[Figure: the McCulloch-Pitts neuron model]


Boolean Functions Using M-P Neuron



M-P Neuron: A Concise Representation

▪ This representation simply denotes that, for the boolean inputs x_1, x_2 and x_3,
if g(x), i.e. the sum, is ≥ θ, the neuron will fire; otherwise, it won't.



AND Function

▪ An AND function neuron would only fire when ALL the inputs are ON, i.e.
g(x) ≥ 3 here (for three inputs).



OR Function

▪ An OR function neuron would fire if ANY of the inputs is ON i.e., g(x) ≥ 1 here.



NOR Function

▪ For a NOR neuron to fire, we want ALL the inputs to be 0, so the thresholding
parameter should also be 0, i.e. the sum of inputs = θ = 0.
▪ The circle at the end indicates an inhibitory input: if any inhibitory input is 1,
the output will be 0.





NOT Function

▪ For a NOT neuron, an input of 1 outputs 0 and an input of 0 outputs 1.

▪ The neuron fires when the sum of inputs = θ = 0.
▪ The single input is inhibitory.



NAND Function

▪ For a NAND neuron to fire, we want at least one of the inputs to be 0, so the
thresholding parameter should be 1.
▪ The neuron fires when the sum of inputs ≤ θ.
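The boolean functions covered above can all be expressed as M-P threshold units. The sketch below is an assumption (not taken from the slides); the helper name `fires` is hypothetical.

```python
def fires(inputs, theta):
    # The M-P rule: fire (1) when the sum of boolean inputs reaches theta.
    return 1 if sum(inputs) >= theta else 0

def AND(x1, x2):   # fires only when ALL inputs are 1, so theta = 2
    return fires([x1, x2], 2)

def OR(x1, x2):    # fires when ANY input is 1, so theta = 1
    return fires([x1, x2], 1)

def NOT(x):        # the single input is inhibitory: output 0 whenever x = 1
    return 0 if x == 1 else fires([], 0)

def NOR(x1, x2):   # all inputs inhibitory: fires only when both are 0
    return 0 if (x1 or x2) else 1

def NAND(x1, x2):  # fires when the sum of inputs <= theta = 1
    return 1 if x1 + x2 <= 1 else 0

# Truth-table check for the two-input gates
for a in (0, 1):
    for b in (0, 1):
        assert AND(a, b) == (a & b)
        assert OR(a, b) == (a | b)
        assert NOR(a, b) == (1 - (a | b))
        assert NAND(a, b) == (1 - (a & b))
```

Note that NOT and NOR cannot be written as a plain "sum ≥ θ" rule over their inputs; they rely on the inhibitory-input convention, which is why the slides draw them with the small circle.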



Limitations Of M-P Neuron
▪ What about non-boolean (say, real) inputs?
▪ What about boolean and non-boolean functions that are
not linearly separable?
▪ Do we always need to hand-code the threshold?
▪ Are all inputs equal? What if we want to assign
more importance to some inputs?

