Lecture 2.1.9 Comparison of BNN and ANN

The document compares and contrasts biological neural networks (BNNs) and artificial neural networks (ANNs). While they share basic terminology like nodes/soma, inputs/dendrites, and weights/synapses, they differ significantly in structure, size, functions, learning, and style of computation. BNNs have a 3D structure with nearly unrestricted interconnections between ~10^11 neurons. ANNs have a layered structure with restricted connections between 10^2-10^4 neurons. BNNs learn from ambiguous data in parallel through pulse timing, while ANNs require more structured data and learn serially through weight adjustments.


Comparison of BNN and ANN

Before taking a look at the differences between Artificial Neural Network ANN and
Biological Neural Network BNN, let us take a look at the similarities based on the
terminology between these two.

Similarities

BNN          ANN
----------   ----------------------------
Soma         Node
Dendrites    Input
Synapse      Weights or interconnections
Axon         Output

Differences

Structure

BNN: Biological neural networks have the following parts: soma, dendrites, synapses and axons. A neuron has three main parts: a central cell body, called the soma, and two different types of branched, tree-like structures that extend from the soma, called dendrites and axons. Information from other neurons, in the form of electrical impulses, enters the dendrites at connection points called synapses. The information flows from the dendrites to the soma, where it is processed. The output signal, a train of impulses, is then sent down the axon to the synapses of other neurons.

ANN: Artificial neural networks have equivalent components: nodes, inputs, weights and outputs. The main body of an artificial neuron is called a node or unit. Nodes are connected to one another by links that mimic the connections between biological neurons. The arrangement and connections of the neurons make up the network, which has three layers. The first layer, called the input layer, is the only layer exposed to external signals. The input layer transmits signals to the neurons in the next layer, which is called the hidden layer. The hidden layer extracts relevant features or patterns from the received signals. The features or patterns that are considered important are then directed to the output layer, the final layer of the network.

Layers

BNN: Biological neural networks are constructed in a three-dimensional way from microscopic components, and these neurons seem capable of nearly unrestricted interconnections.

ANN: In artificial neural networks this is not true. They are simple clusterings of primitive artificial neurons. This clustering occurs by creating layers, which are then connected to one another. The way the layers are connected may vary, but basically all artificial neural networks have a similar topology.

Size

BNN: about 10^11 neurons.

ANN: about 10^2 to 10^4 neurons.

Functions

BNN: Dendrites provide input signals to the cell. A synapse is able to increase or decrease the strength of the connection; this is where information is stored. The axon sends output signals to another cell, and the axon terminals merge with the dendrites of other cells. The brain is made up of a great number of components (about 10^11), each of which is connected to many other components (about 10^4) and each of which performs some relatively simple computation, whose nature is unclear, in a slow fashion.

ANN: The artificial neuron receives inputs analogous to the electrochemical impulses that the dendrites of a biological neuron receive from other neurons. The artificial signals can be changed by weights in a manner similar to the physical changes that occur in the synapses. The output of the artificial neuron corresponds to the signals a biological neuron sends out over its axon. In connectionism, neurons are connected randomly or uniformly, and all connections perform the same computation. Each connection has a numerical weight associated with it. Each neuron's output is a single numerical activity, computed as a monotonic function of the sum of the products of the activities of the input neurons with their corresponding connection weights (see the formula at the end of this section).

Learning

BNN: Biological networks are able to tolerate ambiguity and to learn on the basis of very impoverished and disorganized data (for example, to learn a grammar from random samples of poorly structured, unsystematic, natural language). They learn from past experience to improve their own performance levels. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons.

ANN: More precisely formatted and structured data and rules are required. Artificial neural networks also learn from past experience to improve their own performance levels.

Style of computation

BNN: Biological neural networks communicate through pulses, using the timing of the pulses to transmit information and perform computation (parallel and distributed).

ANN: Artificial neural networks are based on a computational model involving the propagation of continuous variables from one processing unit to the next (serial and centralized).

Neural network terminology

Processing unit: We can consider an artificial neural network (ANN) as a highly simplified
model of the structure of the biological neural network. An ANN consists of interconnected
processing units. The general model of a processing unit consists of a summing part
followed by an output part. The summing part receives N input values, weights each value,
and computes a weighted sum. The weighted sum is called the activation value. The output
part produces a signal from the activation value. The sign of the weight for each input
determines whether the input is excitatory (positive weight) or inhibitory (negative weight).
The inputs could be discrete or continuous data values, and likewise the outputs also could be
discrete or continuous. The input and output could also be deterministic or stochastic or
fuzzy.
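As a concrete illustration of the summing part and output part described above, the following minimal Python sketch (the function and variable names are chosen here for illustration, not taken from the lecture) computes the activation value as a weighted sum and passes it through a simple threshold output function:

```python
def activation_value(inputs, weights):
    """Summing part: the weighted sum of the N inputs is the activation value.
    A positive weight makes its input excitatory, a negative weight inhibitory."""
    return sum(x * w for x, w in zip(inputs, weights))

def output_signal(activation, threshold=0.0):
    """Output part: here a simple binary threshold; a continuous output
    function (e.g. a sigmoid) could be used instead."""
    return 1 if activation >= threshold else 0

# Example: three inputs, two excitatory weights and one inhibitory weight.
x = [0.5, 1.0, 0.8]
w = [0.4, 0.7, -0.3]
a = activation_value(x, w)   # 0.2 + 0.7 - 0.24 ≈ 0.66
y = output_signal(a)         # 1, since the activation value is non-negative
print(a, y)
```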
Interconnections: In an artificial neural network several processing units are interconnected
according to some topology to accomplish a pattern recognition task. Therefore the inputs to
a processing unit may come from the outputs of other processing units, and/or from external
sources. The output of each unit may be given to several units including itself. The amount of
the output of one unit received by another unit depends on the strength of the connection
between the units, and it is reflected in the weight value associated with the connecting link.
If there are N units in a given ANN, then at any instant of time each unit will have a unique
activation value and a unique output value. The set of the N activation values of the network
defines the activation state of the network at that instant. Likewise, the set of the N output
values of the network defines the output state of the network at that instant. Depending on the
discrete or continuous nature of the activation and output values, the state of the network can
be described by a discrete or continuous point in an N-dimensional space.
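The activation state and output state described above can be pictured as N-dimensional vectors. A small sketch using NumPy, under the assumed convention that W[i, j] holds the weight on the link from unit j to unit i:

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
W = rng.normal(size=(N, N))             # interconnection weights (assumed convention: row = receiving unit)
y = np.zeros(N)                         # current output state: one output value per unit
x_ext = np.array([1.0, 0.0, 0.5, 0.0])  # inputs from external sources

a = W @ y + x_ext                       # activation state: the N activation values at this instant
y_new = np.tanh(a)                      # output state: the N output values at this instant
print(a)                                # each state is a point in N-dimensional space
print(y_new)
```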
Operations: In operation, each unit of an ANN receives inputs from other connected units
and/or from an external source. A weighted sum of the inputs, called the activation value, is computed at a given instant of time. The activation value determines the actual output from the output function unit, i.e.,
the output state of the unit. The output values and other external inputs in turn determine the
activation and output states of the other units. Activation dynamics determines the activation
values of all the units, i.e., the activation state of the network as a function of time. The
activation dynamics also determines the dynamics of the output state of the network. The set
of all activation states defines the activation state space of the network. The set of all output
states defines the output state space of the network. Activation dynamics determines the
trajectory of the path of the states in the state space of the network. For a given network,
defined by the units and their interconnections with appropriate weights, the activation states
determine the short term memory function of the network. Generally, given an external input,
the activation dynamics is followed to recall a pattern stored in a network. In order to store a
pattern in a network, it is necessary to adjust the weights of the connections in the network.
The set of all weights on all connections in a network forms a weight vector. The set of all
possible weight vectors defines the weight space. When the weights are changing, the
synaptic dynamics of the network determines the weight vector as a function of time.
Synaptic dynamics is followed to adjust the weights in order to store the given patterns in the
network. The process of adjusting the weights is referred to as learning. Once the learning
process is completed, the final set of weight values corresponds to the long term memory
function of the network. The procedure to incrementally update each of the weights is called
a learning law or learning algorithm.
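The lecture does not prescribe a particular learning law; as one common example of synaptic dynamics, the sketch below uses a Hebbian rule (an assumed choice) to adjust the weights and thereby store a pattern as the long-term memory function of the network:

```python
import numpy as np

def hebbian_update(W, y, eta=0.1):
    """One step of a Hebbian learning law: the weight from unit j to unit i
    grows with the product of the two units' activities."""
    return W + eta * np.outer(y, y)

N = 4
W = np.zeros((N, N))                        # weights: the long-term memory of the network
pattern = np.array([1.0, -1.0, 1.0, -1.0])  # pattern to be stored
W = hebbian_update(W, pattern)              # synaptic dynamics: the weight vector changes over time
np.fill_diagonal(W, 0.0)                    # drop self-connections (a common, optional choice)
print(W)
```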
Update: In implementation, there are several options available for both activation and
synaptic dynamics. In particular, the updating of the output states of all the units could be
performed synchronously. In this case, the activation values of all the units are computed at
the same time, assuming a given output state throughout. From the activation values, the new
output state of the network is derived. In an asynchronous update, on the other hand, each
unit is updated sequentially, taking the current output state of the network into account each
time. For each unit, the output state can be determined from the activation value either
deterministically or stochastically. In practice, the activation dynamics, including the update,
is much more complex in a biological neural network than the simple models mentioned
above. The ANN models along with the equations governing the activation and synaptic
dynamics are designed according to the pattern recognition task to be handled.
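A sketch of the two update options described above, using an assumed bipolar threshold output function (the lecture leaves the output function open):

```python
import numpy as np

def f(a):
    """Assumed output function: bipolar threshold applied to the activation value."""
    return np.where(a >= 0, 1.0, -1.0)

def synchronous_update(W, y):
    """All activation values are computed from the same (old) output state,
    then the whole output state is replaced at once."""
    return f(W @ y)

def asynchronous_update(W, y):
    """Units are updated one at a time, each taking the current output state
    of the network into account."""
    y = y.copy()
    for i in range(len(y)):
        y[i] = f(W[i] @ y)
    return y

N = 4
rng = np.random.default_rng(1)
W = rng.normal(size=(N, N))
y0 = np.array([1.0, -1.0, -1.0, 1.0])
print(synchronous_update(W, y0))
print(asynchronous_update(W, y0))
```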

Characteristics of ANN
• Adaptive learning
• Self-organization
• Real-time operation
• Fault tolerance via redundant information coding
• Massive parallelism
• Learning and generalizing ability
• Distributed representation

Applications of ANN
• Artificial neural networks have been used in the field of solar energy for the modeling and design of a solar steam-generating plant.
• They are useful in system modeling, such as in implementing complex mapping and
system identification.
• ANNs are used for the estimation of heating loads of buildings, the intercept factor of parabolic-trough collectors, and the local concentration ratio.
• ANNs are used in diverse applications in control, robotics, pattern recognition,
forecasting, medicine, power systems, manufacturing, optimization, signal processing,
and social/psychological sciences.
• They have also been used for the prediction of air flows in a naturally ventilated test
room and for the prediction of the energy consumption of solar buildings.
• They are able to handle noisy and incomplete data and are also able to deal with non-linear problems.
• They are also used in ventilating and air-conditioning systems, refrigeration, modeling, heating, load forecasting, the control of power-generation systems, and the prediction of solar radiation.
