Neural networks are parallel computing devices, which is basically an attempt to make a
computer model of the brain. The main objective is to develop a system to perform various
computational tasks faster than the traditional systems. These tasks include pattern
recognition and classification, approximation, optimization, and data clustering.
What is an Artificial Neural Network?
An Artificial Neural Network (ANN) is an efficient computing system whose central theme is
borrowed from the analogy of biological neural networks. ANNs are also named “artificial
neural systems,” “parallel distributed processing systems,” or “connectionist systems.”
An ANN consists of a large collection of units that are interconnected in some pattern to allow
communication between the units. These units, also referred to as nodes or neurons, are
simple processors that operate in parallel.
Every neuron is connected to other neurons through connection links. Each connection link
is associated with a weight that carries information about the input signal. This is the most useful
information for neurons to solve a particular problem, because the weight usually excites or
inhibits the signal being communicated. Each neuron has an internal state, which is
called an activation signal. Output signals, which are produced by combining the input
signals and the activation rule, may be sent to other units.
A Brief History of ANN
The history of ANN can be divided into the following three eras −
ANN during 1940s to 1960s
Some key developments of this era are as follows −
● 1943 − The concept of neural networks is generally traced to the work of
physiologist Warren McCulloch and mathematician Walter Pitts, who in 1943
modeled a simple neural network using electrical circuits in order to describe
how neurons in the brain might work.
● 1949 − Donald Hebb’s book, The Organization of Behavior, put forth the idea that
repeated activation of one neuron by another increases the strength of the connection
between them each time they are used.
● 1956 − An associative memory network was introduced by Taylor.
● 1958 − A learning method for McCulloch and Pitts neuron model named Perceptron
was invented by Rosenblatt.
● 1960 − Bernard Widrow and Marcian Hoff developed models called "ADALINE" and
“MADALINE.”
ANN during 1960s to 1980s
Some key developments of this era are as follows −
● 1961 − Rosenblatt proposed a “backpropagation” scheme for multilayer networks,
although the attempt was unsuccessful.
● 1964 − Taylor constructed a winner-take-all circuit with inhibitions among output
units.
● 1969 − Minsky and Papert published Perceptrons, a mathematical analysis of the capabilities and limitations of single-layer perceptrons.
● 1971 − Kohonen developed Associative memories.
● 1976 − Stephen Grossberg and Gail Carpenter developed Adaptive resonance theory.
ANN from 1980s till Present
Some key developments of this era are as follows −
● 1982 − The major development was Hopfield’s Energy approach.
● 1985 − Boltzmann machine was developed by Ackley, Hinton, and Sejnowski.
● 1986 − Rumelhart, Hinton, and Williams introduced Generalised Delta Rule.
● 1988 − Kosko developed Binary Associative Memory (BAM) and also gave the
concept of Fuzzy Logic in ANN.
The historical review shows that significant progress has been made in this field. Neural
network based chips are emerging and applications to complex problems are being
developed. Surely, today is a period of transition for neural network technology.
Biological Neuron
A nerve cell (neuron) is a special biological cell that processes information. According to one
estimate, there is a huge number of neurons, approximately $10^{11}$, with numerous
interconnections, approximately $10^{15}$.
Schematic Diagram
Working of a Biological Neuron
As shown in the above diagram, a typical neuron consists of the following four parts with the
help of which we can explain its working −
● Dendrites − They are tree-like branches, responsible for receiving information
from the other neurons a neuron is connected to. In a sense, they are like the
ears of the neuron.
● Soma − It is the cell body of the neuron and is responsible for processing the
information received from the dendrites.
● Axon − It acts like a cable through which the neuron sends information.
● Synapses − These are the connections between the axon of one neuron and the dendrites of other neurons.
ANN versus BNN
Before taking a look at the differences between Artificial Neural Network (ANN) and
Biological Neural Network (BNN), let us take a look at the similarities based on the
terminology between these two.
Biological Neural Network (BNN) | Artificial Neural Network (ANN)
Soma | Node
Dendrites | Input
Synapse | Weights or Interconnections
Axon | Output
The following table shows the comparison between ANN and BNN based on the criteria
mentioned.

Criteria | BNN | ANN
Processing | Massively parallel, slow but superior to ANN | Massively parallel, fast but inferior to BNN
Size | $10^{11}$ neurons and $10^{15}$ interconnections | $10^{2}$ to $10^{4}$ nodes (mainly depends on the type of application and the network designer)
Learning | They can tolerate ambiguity | Very precise, structured and formatted data is required to tolerate ambiguity
Fault tolerance | Performance degrades with even partial damage | It is capable of robust performance, hence has the potential to be fault tolerant
Storage capacity | Stores the information in the synapses | Stores the information in continuous memory locations
Model of Artificial Neural Network
The following diagram represents the general model of ANN followed by its processing.
For the above general model of artificial neural network, the net input can be calculated as
follows −
$$y_{in} = x_{1}w_{1} + x_{2}w_{2} + x_{3}w_{3} + \dotso + x_{m}w_{m}$$
i.e., Net input $y_{in} = \displaystyle\sum\limits_{i=1}^m x_{i}w_{i}$
The output can be calculated by applying the activation function over the net input.
$$Y = F(y_{in})$$
Output = function (net input calculated)
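As a concrete sketch, the net-input and output computation above can be written in a few lines of Python; the function names and the sample values here are illustrative, not from any standard library:

```python
# Sketch of the general ANN model: the net input is the weighted sum of
# the inputs, and the output is an activation function applied to it.

def net_input(xs, ws):
    """y_in = x1*w1 + x2*w2 + ... + xm*wm"""
    return sum(x * w for x, w in zip(xs, ws))

def step(y_in, theta=0.0):
    """A simple threshold activation F."""
    return 1 if y_in > theta else 0

y_in = net_input([1, 0, 1], [0.5, 0.2, 0.4])  # 0.5 + 0.0 + 0.4 = 0.9
y = step(y_in)                                # 0.9 > 0, so y = 1
```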
Network Topology
A network topology is the arrangement of a network along with its nodes and connecting
lines. According to the topology, ANN can be classified as the following kinds −
Feedforward Network
It is a non-recurrent network having processing units/nodes in layers, where all the nodes in a
layer are connected to the nodes of the previous layer. The connections carry different
weights. There is no feedback loop, which means the signal can flow in only one
direction, from input to output. It may be divided into the following two types −
● Single layer feedforward network − The concept is of feedforward ANN having
only one weighted layer. In other words, we can say the input layer is fully connected
to the output layer.
● Multilayer feedforward network − The concept is of a feedforward ANN having
more than one weighted layer. As this network has one or more layers between the
input and the output layer, these intermediate layers are called hidden layers.
Feedback Network
As the name suggests, a feedback network has feedback paths, which means the signal can
flow in both directions using loops. This makes it a non-linear dynamic system, which
changes continuously until it reaches a state of equilibrium. It may be divided into the
following types −
● Recurrent networks − They are feedback networks with closed loops. Following are
the two types of recurrent networks.
● Fully recurrent network − It is the simplest neural network architecture because all
nodes are connected to all other nodes and each node works as both input and output.
● Jordan network − It is a closed loop network in which the output will go to the input
again as feedback as shown in the following diagram.
Adjustments of Weights or Learning
Learning, in artificial neural network, is the method of modifying the weights of connections
between the neurons of a specified network. Learning in ANN can be classified into three
categories namely supervised learning, unsupervised learning, and reinforcement learning.
Supervised Learning
As the name suggests, this type of learning is done under the supervision of a teacher. This
learning process is dependent.
During the training of ANN under supervised learning, the input vector is presented to the
network, which will give an output vector. This output vector is compared with the desired
output vector. An error signal is generated, if there is a difference between the actual output
and the desired output vector. On the basis of this error signal, the weights are adjusted until
the actual output is matched with the desired output.
Unsupervised Learning
As the name suggests, this type of learning is done without the supervision of a teacher. This
learning process is independent.
During the training of ANN under unsupervised learning, the input vectors of similar type are
combined to form clusters. When a new input pattern is applied, then the neural network
gives an output response indicating the class to which the input pattern belongs.
There is no feedback from the environment as to what should be the desired output and if it is
correct or incorrect. Hence, in this type of learning, the network itself must discover the
patterns and features from the input data, and the relation for the input data over the output.
Reinforcement Learning
As the name suggests, this type of learning is used to reinforce or strengthen the network over
some critic information. This learning process is similar to supervised learning; however, we
might have much less information.
During the training of network under reinforcement learning, the network receives some
feedback from the environment. This makes it somewhat similar to supervised learning.
However, the feedback obtained here is evaluative not instructive, which means there is no
teacher as in supervised learning. After receiving the feedback, the network performs
adjustments of the weights to get better critic information in future.
Activation Functions
It may be defined as the function applied over the net input of a neuron to obtain the exact
output. In ANN, we apply activation functions over the net input to calculate the output.
Following are some activation functions of interest −
Linear Activation Function
It is also called the identity function, as it performs no transformation of the input. It can be defined as −
$$F(x) = x$$
Sigmoid Activation Function
It is of the following two types −
● Binary sigmoidal function − This activation function maps the input into the range between
0 and 1. It is positive in nature. It is always bounded, which means its output cannot
be less than 0 or more than 1. It is also strictly increasing in nature, which means
the higher the input, the higher the output. It can be defined as
$$F(x) = sigm(x) = \frac{1}{1 + \exp(-x)}$$
● Bipolar sigmoidal function − This activation function maps the input into the range
between -1 and 1. It can be positive or negative in nature. It is always bounded, which
means its output cannot be less than -1 or more than 1. Like the binary sigmoid, it is
strictly increasing in nature. It can be defined as
$$F(x) = sigm(x) = \frac{2}{1 + \exp(-x)} - 1 = \frac{1 - \exp(-x)}{1 + \exp(-x)}$$
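A short Python sketch of these two functions (the names are illustrative) shows that the bipolar sigmoid is just the binary sigmoid rescaled from (0, 1) to (-1, 1):

```python
import math

def binary_sigmoid(x):
    """F(x) = 1 / (1 + exp(-x)); output lies in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    """F(x) = 2 / (1 + exp(-x)) - 1; output lies in (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

# Both pass through the midpoint of their range at x = 0:
# binary_sigmoid(0) = 0.5, bipolar_sigmoid(0) = 0.0
```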
As stated earlier, ANN is completely inspired by the way the biological nervous system, i.e. the
human brain, works. The most impressive characteristic of the human brain is its ability to learn;
hence the same feature is acquired by ANN.
What Is Learning in ANN?
Basically, learning means adapting to change as and when there is a change in the
environment. ANN is a complex system, or more precisely a complex adaptive system,
which can change its internal structure based on the information passing through it.
Why Is It Important?
Being a complex adaptive system, learning in ANN implies that a processing unit is capable
of changing its input/output behavior due to a change in the environment. The importance of
learning in ANN increases because the activation function, as well as the input/output
vectors, is fixed when a particular network is constructed. Hence, to change the input/output
behavior, we need to adjust the weights.
Classification
It may be defined as the process of learning to sort sample data into different classes by
finding common features between samples of the same class. For example, to
perform training of ANN, we have some training samples with unique features, and to
perform its testing we have some testing samples with other unique features. Classification is
an example of supervised learning.
Neural Network Learning Rules
We know that, during ANN learning, to change the input/output behavior, we need to adjust
the weights. Hence, a method is required with the help of which the weights can be modified.
These methods are called Learning rules, which are simply algorithms or equations.
Following are some learning rules for the neural network −
Hebbian Learning Rule
This rule, one of the oldest and simplest, was introduced by Donald Hebb in his book The
Organization of Behavior in 1949. It is a kind of feed-forward, unsupervised learning.
Basic Concept − This rule is based on a proposal given by Hebb, who wrote −
“When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes
part in firing it, some growth process or metabolic change takes place in one or both cells
such that A’s efficiency, as one of the cells firing B, is increased.”
From the above postulate, we can conclude that the connections between two neurons might
be strengthened if the neurons fire at the same time and might weaken if they fire at different
times.
Mathematical Formulation − According to the Hebbian learning rule, following is the formula
to increase the weight of a connection at every time step.
$$\Delta w_{ji}(t) = \alpha\, x_{i}(t)\, y_{j}(t)$$
Here, $\Delta w_{ji}(t)$ = increment by which the weight of the connection increases at time
step t
$\alpha$ = the positive and constant learning rate
$x_{i}(t)$ = the input value from the pre-synaptic neuron at time step t
$y_{j}(t)$ = the output of the post-synaptic neuron at the same time step t
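A single Hebbian update step can be sketched in Python as follows; the function name and values are illustrative:

```python
# One Hebbian step: delta_w_ji = alpha * x_i * y_j, applied to every
# connection feeding a single post-synaptic output y.

def hebbian_update(weights, x, y, alpha=0.1):
    """Return the new weights after one Hebbian learning step."""
    return [w + alpha * xi * y for w, xi in zip(weights, x)]

# If input and output fire together (both 1), the weights grow:
w = hebbian_update([0.0, 0.0], [1, 1], y=1, alpha=0.5)  # [0.5, 0.5]
```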
Perceptron Learning Rule
This rule is an error-correcting, supervised learning algorithm for single-layer feedforward
networks with a linear activation function, introduced by Rosenblatt.
Basic Concept − As being supervised in nature, to calculate the error, there would be a
comparison between the desired/target output and the actual output. If there is any difference
found, then a change must be made to the weights of connection.
Mathematical Formulation − To explain its mathematical formulation, suppose we have a finite
number ‘N’ of input vectors, x(n), along with their desired/target output vectors, t(n), where n =
1 to N.
Now the output ‘y’ can be calculated, as explained earlier, on the basis of the net input, with the
activation function applied over that net input expressed as follows −
$$y = f(y_{in}) = \begin{cases}1, & y_{in} > \theta\\0, & y_{in} \leqslant \theta\end{cases}$$
Where θ is the threshold.
The updating of weight can be done in the following two cases −
Case I − when t ≠ y, then
$$w(new) = w(old) + tx$$
Case II − when t = y, then
No change in weight
Delta Learning Rule (Widrow-Hoff Rule)
Introduced by Bernard Widrow and Marcian Hoff, and also called the Least Mean Square (LMS)
method, this rule minimizes the error over all training patterns. It is a kind of supervised learning
algorithm with a continuous activation function.
Basic Concept − The basis of this rule is the gradient-descent approach. The
delta rule updates the synaptic weights so as to minimize the difference between the net input to
the output unit and the target value.
Mathematical Formulation − To update the synaptic weights, the delta rule is given by
$$\Delta w_{i} = \alpha\, x_{i}\, e_{j}$$
Here $\Delta w_{i}$ = weight change for the i-th pattern;
$\alpha$ = the positive and constant learning rate;
$x_{i}$ = the input value from the pre-synaptic neuron;
$e_{j}$ = $(t - y_{in})$, the difference between the desired/target output and the actual
output $y_{in}$
The above delta rule is for a single output unit only.
The updating of weight can be done in the following two cases −
Case-I − when t ≠ y, then
$$w(new) = w(old) + \Delta w$$
Case-II − when t = y, then
No change in weight
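The update for a single output unit can be sketched as follows, with illustrative values; note that the error uses the net input $y_{in}$ rather than a thresholded output:

```python
# One delta-rule step: delta_w_i = alpha * x_i * e, with e = (t - y_in).

def delta_update(weights, x, t, alpha=0.1):
    """Return the new weights after one delta-rule step."""
    y_in = sum(wi * xi for wi, xi in zip(weights, x))
    e = t - y_in                      # the computed error e_j
    return [wi + alpha * xi * e for wi, xi in zip(weights, x)]

w = delta_update([0.0, 0.0], [1, 1], t=1, alpha=0.5)  # error 1 -> [0.5, 0.5]
w = delta_update(w, [1, 1], t=1, alpha=0.5)           # error now 0, no change
```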
Perceptron
Developed by Frank Rosenblatt by using McCulloch and Pitts model, perceptron is the basic
operational unit of artificial neural networks. It employs supervised learning rule and is able
to classify the data into two classes.
Operational characteristics of the perceptron − It consists of a single neuron with an arbitrary
number of inputs along with adjustable weights, but the output of the neuron is 1 or 0
depending upon the threshold. It also consists of a bias whose weight is always 1. The following
figure gives a schematic representation of the perceptron.
Perceptron thus has the following three basic elements −
● Links − It has a set of connection links, each of which carries a weight, including a
bias that always has weight 1.
● Adder − It adds the inputs after they are multiplied by their respective weights.
● Activation function − It limits the output of the neuron. The most basic activation
function is the Heaviside step function, which has two possible outputs. This function
returns 1 if the input is positive, and 0 for any negative input.
Training Algorithm
Perceptron network can be trained for single output unit as well as multiple output units.
Training Algorithm for Single Output Unit
Step 1 − Initialize the following to start the training −
● Weights
● Bias
● Learning rate $\alpha$
For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning
rate must be set equal to 1.
Step 2 − Continue step 3-8 when the stopping condition is not true.
Step 3 − Continue step 4-6 for every training vector x.
Step 4 − Activate each input unit as follows −
$$x_{i} = s_{i}\ (i = 1\ \text{to}\ n)$$
Step 5 − Now obtain the net input with the following relation −
$$y_{in} = b + \displaystyle\sum\limits_{i=1}^n x_{i}\, w_{i}$$
Here ‘b’ is the bias and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output.
$$f(y_{in}) = \begin{cases}1 & \text{if}\ y_{in} > \theta\\0 & \text{if}\ -\theta \leqslant y_{in} \leqslant \theta\\-1 & \text{if}\ y_{in} < -\theta\end{cases}$$
Step 7 − Adjust the weight and bias as follows −
Case 1 − if y ≠ t then,
$$w_{i}(new) = w_{i}(old) + \alpha\, t\, x_{i}$$
$$b(new) = b(old) + \alpha\, t$$
Case 2 − if y = t then,
$$w_{i}(new) = w_{i}(old)$$
$$b(new) = b(old)$$
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
Step 8 − Test for the stopping condition, which would happen when there is no change in
weight.
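Steps 1-8 above can be sketched in Python for a small linearly separable problem; the AND-gate data with bipolar inputs and targets, and all function names, are illustrative choices rather than part of the original algorithm:

```python
# Perceptron training for a single output unit, following Steps 1-8 with
# a three-valued step activation (theta = 0 here for simplicity).

def perceptron_train(samples, alpha=1.0, theta=0.0, max_epochs=100):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0                       # Step 1: zero initialization
    for _ in range(max_epochs):                 # Step 2: stopping condition
        changed = False
        for x, t in samples:                    # Steps 3-4: each vector
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))           # Step 5
            y = 1 if y_in > theta else (-1 if y_in < -theta else 0)   # Step 6
            if y != t:                          # Step 7: adjust on error
                w = [wi + alpha * t * xi for wi, xi in zip(w, x)]
                b += alpha * t
                changed = True
        if not changed:                         # Step 8: no weight change
            break
    return w, b

AND = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = perceptron_train(AND)
```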
Training Algorithm for Multiple Output Units
The following diagram is the architecture of perceptron for multiple output classes.
Step 1 − Initialize the following to start the training −
● Weights
● Bias
● Learning rate $\alpha$
For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning
rate must be set equal to 1.
Step 2 − Continue step 3-8 when the stopping condition is not true.
Step 3 − Continue step 4-6 for every training vector x.
Step 4 − Activate each input unit as follows −
$$x_{i} = s_{i}\ (i = 1\ \text{to}\ n)$$
Step 5 − Obtain the net input at each output unit with the following relation −
$$y_{inj} = b_{j} + \displaystyle\sum\limits_{i=1}^n x_{i}\, w_{ij}\ \ \ \ j = 1\ \text{to}\ m$$
Here $b_{j}$ is the bias on output unit j and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output for each output
unit j = 1 to m −
$$f(y_{inj}) = \begin{cases}1 & \text{if}\ y_{inj} > \theta\\0 & \text{if}\ -\theta \leqslant y_{inj} \leqslant \theta\\-1 & \text{if}\ y_{inj} < -\theta\end{cases}$$
Step 7 − Adjust the weight and bias for i = 1 to n and j = 1 to m as follows −
Case 1 − if $y_{j} \neq t_{j}$ then,
$$w_{ij}(new) = w_{ij}(old) + \alpha\, t_{j}\, x_{i}$$
$$b_{j}(new) = b_{j}(old) + \alpha\, t_{j}$$
Case 2 − if $y_{j} = t_{j}$ then,
$$w_{ij}(new) = w_{ij}(old)$$
$$b_{j}(new) = b_{j}(old)$$
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
Step 8 − Test for the stopping condition, which will happen when there is no change in
weight.
Adaptive Linear Neuron (Adaline)
Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It
was developed by Widrow and Hoff in 1960. Some important points about Adaline are as
follows −
● It uses a bipolar activation function.
● It uses delta rule for training to minimize the Mean-Squared Error (MSE) between the
actual output and the desired/target output.
● The weights and the bias are adjustable.
Architecture
The basic structure of Adaline is similar to the perceptron, with an extra feedback loop with the
help of which the actual output is compared with the desired/target output. After comparison,
on the basis of the training algorithm, the weights and bias are updated.
Training Algorithm
Step 1 − Initialize the following to start the training −
● Weights
● Bias
● Learning rate $\alpha$
For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning
rate must be set equal to 1.
Step 2 − Continue step 3-8 when the stopping condition is not true.
Step 3 − Continue step 4-6 for every bipolar training pair s:t.
Step 4 − Activate each input unit as follows −
$$x_{i} = s_{i}\ (i = 1\ \text{to}\ n)$$
Step 5 − Obtain the net input with the following relation −
$$y_{in} = b + \displaystyle\sum\limits_{i=1}^n x_{i}\, w_{i}$$
Here ‘b’ is the bias and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output −
$$f(y_{in}) = \begin{cases}1 & \text{if}\ y_{in} \geqslant 0\\-1 & \text{if}\ y_{in} < 0\end{cases}$$
Step 7 − Adjust the weight and bias as follows −
Case 1 − if y ≠ t then,
$$w_{i}(new) = w_{i}(old) + \alpha(t - y_{in})x_{i}$$
$$b(new) = b(old) + \alpha(t - y_{in})$$
Case 2 − if y = t then,
$$w_{i}(new) = w_{i}(old)$$
$$b(new) = b(old)$$
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
$(t - y_{in})$ is the computed error.
Step 8 − Test for the stopping condition, which will happen when there is no change in
weight or the highest weight change occurred during training is smaller than the specified
tolerance.
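The Adaline steps above can be sketched as follows; the toy data (a bipolar AND gate), the learning rate, and the tolerance are illustrative assumptions:

```python
# Adaline (delta-rule) training: weights move by alpha * (t - y_in) * x_i
# on each presentation, stopping when the largest weight change in an
# epoch falls below a tolerance (or after max_epochs).

def adaline_train(samples, alpha=0.1, tol=1e-4, max_epochs=1000):
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        max_change = 0.0
        for x, t in samples:
            y_in = b + sum(wi * xi for wi, xi in zip(w, x))   # Step 5
            e = t - y_in                                      # computed error
            for i, xi in enumerate(x):                        # Step 7
                dw = alpha * e * xi
                w[i] += dw
                max_change = max(max_change, abs(dw))
            b += alpha * e
        if max_change < tol:                                  # Step 8
            break
    return w, b

AND = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, b = adaline_train(AND)
```

After training, thresholding the net input with the bipolar step function classifies all four AND patterns correctly, even though the least-squares solution leaves a small residual error.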
Multiple Adaptive Linear Neuron (Madaline)
Madaline, which stands for Multiple Adaptive Linear Neuron, is a network which consists of
many Adalines in parallel. It has a single output unit. Some important points about
Madaline are as follows −
● It is just like a multilayer perceptron, where the Adalines act as hidden units between
the input and the Madaline layer.
● The weights and the bias between the input and Adaline layers, as we see in the
Adaline architecture, are adjustable.
● The Adaline and Madaline layers have fixed weights and a bias of 1.
● Training can be done with the help of the Delta rule.
Architecture
The architecture of Madaline consists of “n” neurons of the input layer, “m” neurons of the
Adaline layer, and 1 neuron of the Madaline layer. The Adaline layer can be considered as the
hidden layer as it is between the input layer and the output layer, i.e. the Madaline layer.
Training Algorithm
By now we know that only the weights and bias between the input and the Adaline layer are
to be adjusted, and the weights and bias between the Adaline and the Madaline layer are
fixed.
Step 1 − Initialize the following to start the training −
● Weights
● Bias
● Learning rate $\alpha$
For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning
rate must be set equal to 1.
Step 2 − Continue step 3-8 when the stopping condition is not true.
Step 3 − Continue step 4-6 for every bipolar training pair s:t.
Step 4 − Activate each input unit as follows −
$$x_{i} = s_{i}\ (i = 1\ \text{to}\ n)$$
Step 5 − Obtain the net input at each hidden unit, i.e. the Adaline layer, with the following
relation −
$$Q_{inj} = b_{j} + \displaystyle\sum\limits_{i=1}^n x_{i}\, w_{ij}\ \ \ j = 1\ \text{to}\ m$$
Here $b_{j}$ is the bias on Adaline unit j and ‘n’ is the total number of input neurons.
Step 6 − Apply the following activation function to obtain the final output at the Adaline and
the Madaline layer −
$$f(x) = \begin{cases}1 & \text{if}\ x \geqslant 0\\-1 & \text{if}\ x < 0\end{cases}$$
Output at the hidden (Adaline) unit
$$Q_{j} = f(Q_{inj})$$
Final output of the network
$$y = f(y_{in})$$
i.e. $y_{in} = b_{0} + \sum_{j=1}^m Q_{j}\, v_{j}$
Step 7 − Calculate the error and adjust the weights as follows −
Case 1 − if y ≠ t and t = 1 then,
$$w_{ij}(new) = w_{ij}(old) + \alpha(1 - Q_{inj})x_{i}$$
$$b_{j}(new) = b_{j}(old) + \alpha(1 - Q_{inj})$$
In this case, the weights are updated on the unit $Q_{j}$ whose net input is closest to 0, because
t = 1.
Case 2 − if y ≠ t and t = -1 then,
$$w_{ik}(new) = w_{ik}(old) + \alpha(-1 - Q_{ink})x_{i}$$
$$b_{k}(new) = b_{k}(old) + \alpha(-1 - Q_{ink})$$
In this case, the weights are updated on every unit $Q_{k}$ whose net input is positive, because
t = -1.
Here ‘y’ is the actual output and ‘t’ is the desired/target output.
Case 3 − if y = t then
There would be no change in weights.
Step 8 − Test for the stopping condition, which will happen when there is no change in
weight or the highest weight change occurred during training is smaller than the specified
tolerance.
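The procedure above can be sketched in Python on a toy XOR problem, which a single perceptron cannot solve. The XOR data, the fixed OR-like Madaline weights, and the small random initialization are illustrative assumptions (zero initial weights would make the Adalines identical), and this MRI-style rule is a heuristic that is not guaranteed to converge:

```python
import random

def f(x):
    """Bipolar step activation from Step 6."""
    return 1 if x >= 0 else -1

def madaline_train(samples, m=2, alpha=0.5, max_epochs=100, seed=0):
    rng = random.Random(seed)
    n = len(samples[0][0])
    # Trainable input->Adaline weights and biases (small random values).
    w = [[rng.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]
    b = [rng.uniform(-0.5, 0.5) for _ in range(m)]
    v, b0 = [0.5] * m, 0.5          # fixed Adaline->Madaline weights (OR-like)
    for _ in range(max_epochs):
        errors = 0
        for x, t in samples:
            q_in = [b[j] + sum(x[i] * w[i][j] for i in range(n)) for j in range(m)]
            q = [f(qi) for qi in q_in]
            y = f(b0 + sum(qj * vj for qj, vj in zip(q, v)))
            if y == t:
                continue                       # Case 3: no change
            errors += 1
            if t == 1:
                # Case 1: update the Adaline whose net input is closest to 0.
                j = min(range(m), key=lambda jj: abs(q_in[jj]))
                b[j] += alpha * (1 - q_in[j])
                for i in range(n):
                    w[i][j] += alpha * (1 - q_in[j]) * x[i]
            else:
                # Case 2: update every Adaline with positive net input.
                for j in range(m):
                    if q_in[j] > 0:
                        b[j] += alpha * (-1 - q_in[j])
                        for i in range(n):
                            w[i][j] += alpha * (-1 - q_in[j]) * x[i]
        if errors == 0:
            break
    return w, b, v, b0

XOR = [((1, 1), -1), ((1, -1), 1), ((-1, 1), 1), ((-1, -1), -1)]
w, b, v, b0 = madaline_train(XOR)
```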
Back Propagation Neural Networks
A Back Propagation Network (BPN) is a multilayer neural network consisting of an input layer,
at least one hidden layer and an output layer. As its name suggests, back propagation takes
place in this network. The error, calculated at the output layer by comparing the
target output and the actual output, is propagated back towards the input layer.
Architecture
As shown in the diagram, the architecture of BPN has three interconnected layers having
weights on them. The hidden layer as well as the output layer also has bias, whose weight is
always 1, on them. As is clear from the diagram, the working of BPN is in two phases. One
phase sends the signal from the input layer to the output layer, and the other phase back
propagates the error from the output layer to the input layer.
Training Algorithm
For training, BPN uses the binary sigmoid activation function. The training of BPN has
the following three phases.
● Phase 1 − Feed Forward Phase
● Phase 2 − Back Propagation of error
● Phase 3 − Updating of weights
All these phases are combined in the following algorithm.
Step 1 − Initialize the following to start the training −
● Weights
● Learning rate $\alpha$
For easy calculation and simplicity, take some small random values.
Step 2 − Continue step 3-11 when the stopping condition is not true.
Step 3 − Continue step 4-10 for every training pair.
Phase 1
Step 4 − Each input unit receives input signal $x_{i}$ and sends it to the hidden units for all i = 1 to
n.
Step 5 − Calculate the net input at the hidden unit using the following relation −
$$Q_{inj} = b_{0j} + \sum_{i=1}^n x_{i}v_{ij}\ \ \ \ j = 1\ \text{to}\ p$$
Here $b_{0j}$ is the bias on hidden unit j, and $v_{ij}$ is the weight on unit j of the hidden layer
coming from unit i of the input layer.
Now calculate the net output by applying the following activation function
$$Q_{j} = f(Q_{inj})$$
Send these output signals of the hidden layer units to the output layer units.
Step 6 − Calculate the net input at the output layer unit using the following relation −
$$y_{ink} = b_{0k} + \sum_{j=1}^p Q_{j}\, w_{jk}\ \ k = 1\ \text{to}\ m$$
Here $b_{0k}$ is the bias on output unit k, and $w_{jk}$ is the weight on unit k of the output layer
coming from unit j of the hidden layer.
Calculate the net output by applying the following activation function
$$y_{k} = f(y_{ink})$$
Phase 2
Step 7 − Compute the error-correcting term, in correspondence with the target pattern
received at each output unit, as follows −
$$\delta_{k} = (t_{k} - y_{k})f^{'}(y_{ink})$$
On this basis, calculate the weight and bias corrections as follows −
$$\Delta w_{jk} = \alpha\, \delta_{k}\, Q_{j}$$
$$\Delta b_{0k} = \alpha\, \delta_{k}$$
Then, send $\delta_{k}$ back to the hidden layer.
Step 8 − Now each hidden unit sums its delta inputs from the output units −
$$\delta_{inj} = \displaystyle\sum\limits_{k=1}^m \delta_{k}\, w_{jk}$$
The error term can then be calculated as follows −
$$\delta_{j} = \delta_{inj}f^{'}(Q_{inj})$$
On this basis, calculate the weight and bias corrections as follows −
$$\Delta v_{ij} = \alpha\,\delta_{j}\,x_{i}$$
$$\Delta b_{0j} = \alpha\,\delta_{j}$$
Phase 3
Step 9 − Each output unit ($y_{k}$, k = 1 to m) updates its weights and bias as follows −
$$w_{jk}(new) = w_{jk}(old) + \Delta w_{jk}$$
$$b_{0k}(new) = b_{0k}(old) + \Delta b_{0k}$$
Step 10 − Each hidden unit ($Q_{j}$, j = 1 to p) updates its weights and bias as follows −
$$v_{ij}(new) = v_{ij}(old) + \Delta v_{ij}$$
$$b_{0j}(new) = b_{0j}(old) + \Delta b_{0j}$$
Step 11 − Check for the stopping condition, which may be either the number of epochs
reached or the target output matches the actual output.
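All three phases can be sketched end to end in Python; the XOR task, the layer sizes, the learning rate, and the fixed epoch count used as the stopping condition are illustrative choices, not part of the original algorithm:

```python
import math, random

def sigmoid(x):
    """Binary sigmoid; note f'(y_in) = y * (1 - y) for this function."""
    return 1.0 / (1.0 + math.exp(-x))

def train_bpn(samples, n, p, m, alpha=0.5, epochs=5000, seed=1):
    rng = random.Random(seed)
    v = [[rng.uniform(-0.5, 0.5) for _ in range(p)] for _ in range(n)]  # input->hidden
    bv = [rng.uniform(-0.5, 0.5) for _ in range(p)]
    w = [[rng.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(p)]  # hidden->output
    bw = [rng.uniform(-0.5, 0.5) for _ in range(m)]
    for _ in range(epochs):
        for x, t in samples:
            # Phase 1: feed forward
            q = [sigmoid(bv[j] + sum(x[i] * v[i][j] for i in range(n))) for j in range(p)]
            y = [sigmoid(bw[k] + sum(q[j] * w[j][k] for j in range(p))) for k in range(m)]
            # Phase 2: back propagation of error
            dk = [(t[k] - y[k]) * y[k] * (1 - y[k]) for k in range(m)]
            dj = [sum(dk[k] * w[j][k] for k in range(m)) * q[j] * (1 - q[j]) for j in range(p)]
            # Phase 3: updating of weights
            for j in range(p):
                for k in range(m):
                    w[j][k] += alpha * dk[k] * q[j]
            for k in range(m):
                bw[k] += alpha * dk[k]
            for i in range(n):
                for j in range(p):
                    v[i][j] += alpha * dj[j] * x[i]
            for j in range(p):
                bv[j] += alpha * dj[j]
    def predict(x):
        q = [sigmoid(bv[j] + sum(x[i] * v[i][j] for i in range(n))) for j in range(p)]
        return [sigmoid(bw[k] + sum(q[j] * w[j][k] for j in range(p))) for k in range(m)]
    return predict

XOR = [((0, 0), (0,)), ((0, 1), (1,)), ((1, 0), (1,)), ((1, 1), (0,))]
predict = train_bpn(XOR, n=2, p=4, m=1)
```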
Generalized Delta Learning Rule
The delta rule works only for the output layer. On the other hand, the generalized delta rule, also
called the back-propagation rule, is a way of creating the desired values for the hidden layer.
Mathematical Formulation
For the activation function $y_{k} = f(y_{ink})$, the net inputs on the hidden layer
and on the output layer can be given by
$$y_{ink} = \displaystyle\sum\limits_{j} z_{j}w_{jk}$$
And $y_{inj} = \displaystyle\sum\limits_{i} x_{i}v_{ij}$
Now the error which has to be minimized is
$$E = \frac{1}{2}\displaystyle\sum\limits_{k}[t_{k} - y_{k}]^2$$
By using the chain rule, we have
$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial}{\partial w_{jk}}\lgroup\frac{1}{2}\displaystyle\sum\limits_{k}[t_{k} - y_{k}]^2\rgroup$$
$$= \frac{\partial}{\partial w_{jk}}\lgroup\frac{1}{2}[t_{k} - f(y_{ink})]^2\rgroup$$
$$= -[t_{k} - y_{k}]\frac{\partial}{\partial w_{jk}}f(y_{ink})$$
$$= -[t_{k} - y_{k}]f^{'}(y_{ink})\frac{\partial}{\partial w_{jk}}(y_{ink})$$
$$= -[t_{k} - y_{k}]f^{'}(y_{ink})z_{j}$$
Now let us say $\delta_{k} = [t_{k} - y_{k}]f^{'}(y_{ink})$, so that $\frac{\partial E}{\partial w_{jk}} = -\delta_{k}z_{j}$.
The derivative with respect to the weights $v_{ij}$ on connections to the hidden unit $z_{j}$ can be given by −
$$\frac{\partial E}{\partial v_{ij}} = -\displaystyle\sumlimits_{k}\delta_{k}\frac{\partial}{\partial v_{ij}}(y_{ink})$$
Putting in the value of $y_{ink}$ and defining
$$\delta_{j} = \displaystyle\sum\limits_{k}\delta_{k}w_{jk}f^{'}(y_{inj})$$
we get $\frac{\partial E}{\partial v_{ij}} = -\delta_{j}x_{i}$.
Weight updating can be done as follows −
For the output unit −
$$\Delta w_{jk} = -\alpha\frac{\partial E}{\partial w_{jk}} = \alpha\,\delta_{k}\,z_{j}$$
For the hidden unit −
$$\Delta v_{ij} = -\alpha\frac{\partial E}{\partial v_{ij}} = \alpha\,\delta_{j}\,x_{i}$$
Associative Memory Network
These kinds of neural networks work on the basis of pattern association, which means they can
store different patterns and, at the time of producing an output, can reproduce one of the stored
patterns by matching it with the given input pattern. These types of memories are also
called Content-Addressable Memory (CAM). Associative memory makes a parallel search with
the stored patterns as data files.
Following are the two types of associative memories we can observe −
● Auto Associative Memory
● Hetero Associative memory
Auto Associative Memory
This is a single layer neural network in which the input training vector and the output target
vectors are the same. The weights are determined so that the network stores a set of patterns.
Architecture
As shown in the following figure, the architecture of Auto Associative memory network has
‘n’ number of input training vectors and similar ‘n’ number of output target vectors.
Training Algorithm
For training, this network uses the Hebb or Delta learning rule.
Step 1 − Initialize all the weights to zero, $w_{ij} = 0$ (i = 1 to n, j = 1 to n).
Step 2 − Perform steps 3-4 for each input vector.
Step 3 − Activate each input unit as follows −
$$x_{i} = s_{i}\ (i = 1\ \text{to}\ n)$$
Step 4 − Activate each output unit as follows −
$$y_{j} = s_{j}\ (j = 1\ \text{to}\ n)$$
Step 5 − Adjust the weights as follows −
$$w_{ij}(new) = w_{ij}(old) + x_{i}y_{j}$$
Testing Algorithm
Step 1 − Set the weights obtained during training with Hebb’s rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit j = 1 to n −
$$y_{inj} = \displaystyle\sum\limits_{i=1}^{n} x_{i}w_{ij}$$
Step 5 − Apply the following activation function to calculate the output −
$$y_{j} = f(y_{inj}) = \begin{cases}+1 & if \; y_{inj} > 0\\ -1 & if \; y_{inj} \leqslant 0\end{cases}$$
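The training and testing algorithms above can be sketched in a few lines of pure Python; the bipolar patterns used below are illustrative, not taken from the text.

```python
def train_auto(patterns):
    """Steps 1-5: build the weight matrix with the Hebb rule
    w_ij(new) = w_ij(old) + x_i * y_j, starting from w_ij = 0."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                w[i][j] += s[i] * s[j]
    return w

def recall_auto(w, x):
    """Testing: y_inj = sum_i x_i w_ij, then y_j = +1 if y_inj > 0 else -1."""
    n = len(x)
    return [1 if sum(x[i] * w[i][j] for i in range(n)) > 0 else -1
            for j in range(n)]
```

Because the network is content-addressable, presenting a slightly corrupted version of a stored bipolar pattern can still recall the stored pattern.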
Hetero Associative Memory
Similar to the Auto Associative Memory network, this is also a single-layer neural network. However, in this network the input training vectors and the output target vectors are not the same. The weights are determined so that the network stores a set of patterns. A hetero associative network is static in nature; hence, there are no non-linear or delay operations.
Architecture
As shown in the following figure, the architecture of Hetero Associative Memory network
has ‘n’ number of input training vectors and ‘m’ number of output target vectors.
Training Algorithm
For training, this network uses the Hebb or delta learning rule.
Step 1 − Initialize all the weights to zero: $w_{ij} = 0$ (i = 1 to n, j = 1 to m)
Step 2 − Perform steps 3-4 for each input vector.
Step 3 − Activate each input unit as follows −
$$x_{i} = s_{i} \;\; (i = 1 \; to \; n)$$
Step 4 − Activate each output unit as follows −
$$y_{j} = s_{j} \;\; (j = 1 \; to \; m)$$
Step 5 − Adjust the weights as follows −
$$w_{ij}(new) = w_{ij}(old) + x_{i}y_{j}$$
Testing Algorithm
Step 1 − Set the weights obtained during training with Hebb’s rule.
Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.
Step 4 − Calculate the net input to each output unit j = 1 to m −
$$y_{inj} = \displaystyle\sum\limits_{i=1}^{n} x_{i}w_{ij}$$
Step 5 − Apply the following activation function to calculate the output −
$$y_{j} = f(y_{inj}) = \begin{cases}+1 & if \; y_{inj} > 0\\ 0 & if \; y_{inj} = 0\\ -1 & if \; y_{inj} < 0\end{cases}$$
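A parallel sketch for the hetero-associative case, where the n-dimensional input and m-dimensional output differ in size; the pattern pair below (n = 4 inputs mapped to m = 2 outputs) is a hypothetical example.

```python
def train_hetero(pairs):
    """Hebb-rule training on (s, t) pairs: s has n components, t has m.
    The weight matrix is n x m, initialized to zero."""
    n, m = len(pairs[0][0]), len(pairs[0][1])
    w = [[0.0] * m for _ in range(n)]
    for s, t in pairs:
        for i in range(n):
            for j in range(m):
                w[i][j] += s[i] * t[j]
    return w

def recall_hetero(w, x):
    """Testing: net input y_inj = sum_i x_i w_ij, then the ternary
    activation +1 / 0 / -1 depending on the sign of y_inj."""
    n, m = len(w), len(w[0])
    out = []
    for j in range(m):
        y_in = sum(x[i] * w[i][j] for i in range(n))
        out.append(1 if y_in > 0 else (0 if y_in == 0 else -1))
    return out
```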
Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists of a single
layer which contains one or more fully connected recurrent neurons. The Hopfield network is
commonly used for auto-association and optimization tasks.
Discrete Hopfield Network
A discrete Hopfield network operates in a discrete-time fashion; in other words, its input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, −1) in nature. The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
Architecture
Following are some important points to keep in mind about discrete Hopfield network −
● This model consists of neurons with one inverting and one non-inverting output.
● The output of each neuron should be an input to the other neurons but not an input to itself.
● Weight/connection strength is represented by wij.
● Connections can be excitatory as well as inhibitory: a connection is excitatory if the output of the neuron is the same as its input, otherwise inhibitory.
● Weights should be symmetrical, i.e. wij = wji
The output from Y1 going to Y2, Yi and Yn has the weights w12, w1i and w1n respectively. Similarly, the other arcs carry their own weights.
Training Algorithm
During training of a discrete Hopfield network, the weights are updated. The input vectors can be either binary or bipolar; in both cases, the weights can be computed with the following relations.
Case 1 − Binary input patterns
For a set of binary patterns s(p), p = 1 to P, where s(p) = (s1(p), s2(p), ..., si(p), ..., sn(p)), the weight matrix is given by
$$w_{ij} = \displaystyle\sum\limits_{p=1}^{P} [2s_{i}(p) - 1][2s_{j}(p) - 1] \;\; for \; i \neq j$$
Case 2 − Bipolar input patterns
For a set of bipolar patterns s(p), p = 1 to P, where s(p) = (s1(p), s2(p), ..., si(p), ..., sn(p)), the weight matrix is given by
$$w_{ij} = \displaystyle\sum\limits_{p=1}^{P} s_{i}(p)\,s_{j}(p) \;\; for \; i \neq j$$
Testing Algorithm
Step 1 − Initialize the weights, which are obtained from the training algorithm using the Hebbian principle.
Step 2 − Perform steps 3-9, as long as the activations of the network have not converged.
Step 3 − For each input vector X, perform steps 4-8.
Step 4 − Make the initial activation of the network equal to the external input vector X as follows −
$$y_{i} = x_{i} \;\; for \; i = 1 \; to \; n$$
Step 5 − For each unit Yi, perform steps 6-9.
Step 6 − Calculate the net input of the network as follows −
$$y_{ini} = x_{i} + \displaystyle\sum\limits_{j} y_{j}w_{ji}$$
Step 7 − Apply the activation as follows over the net input to calculate the output −
$$y_{i} = \begin{cases}1 & if \; y_{ini} > \theta_{i}\\ y_{i} & if \; y_{ini} = \theta_{i}\\ 0 & if \; y_{ini} < \theta_{i}\end{cases}$$
Here $\theta_{i}$ is the threshold.
Step 8 − Broadcast this output yi to all other units.
Step 9 − Test the network for convergence.
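The testing algorithm above can be sketched in pure Python, assuming bipolar patterns stored with the Case 2 weight rule; the stored pattern and noisy probe are illustrative.

```python
def hopfield_weights(patterns):
    """Bipolar case: w_ij = sum_p s_i(p) s_j(p), with no self-connections
    (w_ii = 0); the matrix is symmetric by construction."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for s in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += s[i] * s[j]
    return w

def hopfield_recall(w, x, theta=0.0, max_sweeps=10):
    """Steps 4-9: start from the external input x (held fixed) and update
    one unit at a time with y_ini = x_i + sum_j y_j w_ji, until stable."""
    y = list(x)
    n = len(y)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            y_in = x[i] + sum(y[j] * w[j][i] for j in range(n))
            # Threshold: above theta -> 1, below -> -1, equal -> unchanged
            new = 1 if y_in > theta else (y[i] if y_in == theta else -1)
            if new != y[i]:
                y[i], changed = new, True
        if not changed:        # activations have converged
            break
    return y
```

The asynchronous, one-unit-at-a-time update is what makes the energy argument in the next section work: each single flip can only decrease the energy.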
Energy Function Evaluation
An energy function is defined as a function that is bounded and non-increasing in the state of the system.
The energy function $E_{f}$, also called a Lyapunov function, determines the stability of the discrete Hopfield network, and is characterized as follows −
$$E_{f} = -\frac{1}{2}\displaystyle\sum\limits_{i=1}^{n}\displaystyle\sum\limits_{j=1}^{n} y_{i}y_{j}w_{ij} - \displaystyle\sum\limits_{i=1}^{n} x_{i}y_{i} + \displaystyle\sum\limits_{i=1}^{n} \theta_{i}y_{i}$$
Condition − In a stable network, whenever the state of a node changes, the above energy function will decrease.
Suppose node i changes state from $y_{i}^{(k)}$ to $y_{i}^{(k+1)}$; then the energy change $\Delta E_{f}$ is given by the following relation
$$\Delta E_{f} = E_{f}\big(y_{i}^{(k+1)}\big) - E_{f}\big(y_{i}^{(k)}\big)$$
$$= -\Big(\displaystyle\sum\limits_{j=1}^{n} w_{ij}y_{j}^{(k)} + x_{i} - \theta_{i}\Big)\big(y_{i}^{(k+1)} - y_{i}^{(k)}\big)$$
$$= -(net_{i})\,\Delta y_{i}$$
Here $\Delta y_{i} = y_{i}^{(k+1)} - y_{i}^{(k)}$
The change in energy depends on the fact that only one unit can update its activation at a
time.
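The energy expression can be checked numerically. The sketch below evaluates $E_f$ for a given state and confirms that flipping a unit in the direction of its net input lowers the energy; the two-unit weights, inputs, and zero thresholds are illustrative values.

```python
def hopfield_energy(w, x, y, theta):
    """E_f = -1/2 sum_ij y_i y_j w_ij - sum_i x_i y_i + sum_i theta_i y_i"""
    n = len(y)
    pair_term = -0.5 * sum(y[i] * y[j] * w[i][j]
                           for i in range(n) for j in range(n))
    input_term = -sum(x[i] * y[i] for i in range(n))
    threshold_term = sum(theta[i] * y[i] for i in range(n))
    return pair_term + input_term + threshold_term
```

For example, with weights storing the pattern (+1, −1), starting from the "wrong" state (1, 1) and flipping the second unit toward the sign of its net input moves the state to (1, −1) and strictly decreases the energy, as the condition above requires.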

More Related Content

Similar to Neural networks are parallel computing devices.docx.pdf (20)

DOCX
ABSTRACT.docxiyhkkkkkkkkkkkkkkkkkkkkkkkkkkkk
suriyakalavinoth
 
PDF
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
Guru Nanak Technical Institutions
 
PPT
Neuralnetwork 101222074552-phpapp02
Deepu Gupta
 
DOC
Neural network
Santhosh Gowda
 
PPTX
Sppu engineering artificial intelligence and data science semester 6th Artif...
pawaletrupti434
 
PPTX
02 Fundamental Concepts of ANN
Tamer Ahmed Farrag, PhD
 
PDF
Machine learningiwijshdbebhehehshshsj.pdf
arewho557
 
PDF
A04401001013
ijceronline
 
PPTX
Artificial Neural Network
Prakash K
 
PDF
What are neural networks.pdf
AnastasiaSteele10
 
PDF
What are neural networks.pdf
StephenAmell4
 
PDF
What are neural networks.pdf
StephenAmell4
 
PPTX
neural networks
joshiblog
 
DOC
Neural network and fuzzy logic
Lakshmi Sarveypalli
 
DOCX
Artifical neural networks
alldesign
 
PPTX
Artificial Neural Network in Medical Diagnosis
Adityendra Kumar Singh
 
PDF
Deep Learning detailkesdSECA4002 doc.pdf
Gayatri Wahane
 
PDF
M.Sc_CengineeringS_II_Soft_Computing_PCSC401.pdf
159997111005
 
PPTX
ANN sgjjjjkjhhjkkjjgjkgjhgkjgjjgjjjhjghh
ReehaamMalikArain
 
PPT
Neural Networks
NikitaRuhela
 
ABSTRACT.docxiyhkkkkkkkkkkkkkkkkkkkkkkkkkkkk
suriyakalavinoth
 
CCS355 Neural Networks & Deep Learning Unit 1 PDF notes with Question bank .pdf
Guru Nanak Technical Institutions
 
Neuralnetwork 101222074552-phpapp02
Deepu Gupta
 
Neural network
Santhosh Gowda
 
Sppu engineering artificial intelligence and data science semester 6th Artif...
pawaletrupti434
 
02 Fundamental Concepts of ANN
Tamer Ahmed Farrag, PhD
 
Machine learningiwijshdbebhehehshshsj.pdf
arewho557
 
A04401001013
ijceronline
 
Artificial Neural Network
Prakash K
 
What are neural networks.pdf
AnastasiaSteele10
 
What are neural networks.pdf
StephenAmell4
 
What are neural networks.pdf
StephenAmell4
 
neural networks
joshiblog
 
Neural network and fuzzy logic
Lakshmi Sarveypalli
 
Artifical neural networks
alldesign
 
Artificial Neural Network in Medical Diagnosis
Adityendra Kumar Singh
 
Deep Learning detailkesdSECA4002 doc.pdf
Gayatri Wahane
 
M.Sc_CengineeringS_II_Soft_Computing_PCSC401.pdf
159997111005
 
ANN sgjjjjkjhhjkkjjgjkgjhgkjgjjgjjjhjghh
ReehaamMalikArain
 
Neural Networks
NikitaRuhela
 

More from neelamsanjeevkumar (20)

PPTX
Stem Pyramid Analysis Exploring the Necessity of Context in Art.pptx
neelamsanjeevkumar
 
PPTX
mind map is true picture of parts explanation on ontology
neelamsanjeevkumar
 
PDF
Engineering throughout history sujithra.pdf
neelamsanjeevkumar
 
PDF
01 - Genesis of IoT.pdfgenesis of iot for engineering
neelamsanjeevkumar
 
PDF
VERY NICE FOR CSE 3RD YEAR AND IOT STUDENTS
neelamsanjeevkumar
 
PPT
how to make a very good journal paper for
neelamsanjeevkumar
 
PDF
all syllabus of second year cse departmentcse department syllabus.pdf
neelamsanjeevkumar
 
PPTX
HILL CLIMBING FOR ELECTRONICS AND COMMUNICATION ENG
neelamsanjeevkumar
 
PPTX
simulated aneeleaning in artificial intelligence .pptx
neelamsanjeevkumar
 
PPTX
Feed forward back propogation algorithm .pptx
neelamsanjeevkumar
 
PPTX
IOT Unit 3 for engineering second year .pptx
neelamsanjeevkumar
 
PPT
Genetic-Algorithms forv artificial .ppt
neelamsanjeevkumar
 
PPT
Genetic_Algorithms_genetic for_data .ppt
neelamsanjeevkumar
 
PPT
Genetic-Algorithms for machine learning and ai.ppt
neelamsanjeevkumar
 
PPT
Stepwise Selection Choosing the Optimal Model .ppt
neelamsanjeevkumar
 
PDF
the connection of iot with lora pan which enable
neelamsanjeevkumar
 
PDF
what is lorapan ,explanation of iot module with
neelamsanjeevkumar
 
DOCX
What is First Order Logic in AI or FOL in AI.docx
neelamsanjeevkumar
 
PPTX
unit2_mental objects pruning and game theory .pptx
neelamsanjeevkumar
 
PPTX
2_RaspberryPi presentation.pptx
neelamsanjeevkumar
 
Stem Pyramid Analysis Exploring the Necessity of Context in Art.pptx
neelamsanjeevkumar
 
mind map is true picture of parts explanation on ontology
neelamsanjeevkumar
 
Engineering throughout history sujithra.pdf
neelamsanjeevkumar
 
01 - Genesis of IoT.pdfgenesis of iot for engineering
neelamsanjeevkumar
 
VERY NICE FOR CSE 3RD YEAR AND IOT STUDENTS
neelamsanjeevkumar
 
how to make a very good journal paper for
neelamsanjeevkumar
 
all syllabus of second year cse departmentcse department syllabus.pdf
neelamsanjeevkumar
 
HILL CLIMBING FOR ELECTRONICS AND COMMUNICATION ENG
neelamsanjeevkumar
 
simulated aneeleaning in artificial intelligence .pptx
neelamsanjeevkumar
 
Feed forward back propogation algorithm .pptx
neelamsanjeevkumar
 
IOT Unit 3 for engineering second year .pptx
neelamsanjeevkumar
 
Genetic-Algorithms forv artificial .ppt
neelamsanjeevkumar
 
Genetic_Algorithms_genetic for_data .ppt
neelamsanjeevkumar
 
Genetic-Algorithms for machine learning and ai.ppt
neelamsanjeevkumar
 
Stepwise Selection Choosing the Optimal Model .ppt
neelamsanjeevkumar
 
the connection of iot with lora pan which enable
neelamsanjeevkumar
 
what is lorapan ,explanation of iot module with
neelamsanjeevkumar
 
What is First Order Logic in AI or FOL in AI.docx
neelamsanjeevkumar
 
unit2_mental objects pruning and game theory .pptx
neelamsanjeevkumar
 
2_RaspberryPi presentation.pptx
neelamsanjeevkumar
 
Ad

Recently uploaded (20)

PDF
Natural Language processing and web deigning notes
AnithaSakthivel3
 
PDF
Jual GPS Geodetik CHCNAV i93 IMU-RTK Lanjutan dengan Survei Visual
Budi Minds
 
PDF
CFM 56-7B - Engine General Familiarization. PDF
Gianluca Foro
 
PPTX
ENG8 Q1, WEEK 4.pptxoooiioooooooooooooooooooooooooo
chubbychubz1
 
PDF
Geothermal Heat Pump ppt-SHRESTH S KOKNE
SHRESTHKOKNE
 
PPT
IISM Presentation.ppt Construction safety
lovingrkn
 
PDF
PRIZ Academy - Change Flow Thinking Master Change with Confidence.pdf
PRIZ Guru
 
PDF
Natural Language processing and web deigning notes
AnithaSakthivel3
 
PDF
LEARNING CROSS-LINGUAL WORD EMBEDDINGS WITH UNIVERSAL CONCEPTS
kjim477n
 
PPTX
Sensor IC System Design Using COMSOL Multiphysics 2025-July.pptx
James D.B. Wang, PhD
 
PPTX
ENSA_Module_8.pptx_nice_ipsec_presentation
RanaMukherjee24
 
PDF
MOBILE AND WEB BASED REMOTE BUSINESS MONITORING SYSTEM
ijait
 
PDF
th International conference on Big Data, Machine learning and Applications (B...
Zac Darcy
 
PPTX
GitHub_Copilot_Basics...........................pptx
ssusera13041
 
PPTX
Cyclic_Redundancy_Check_Presentation.pptx
alhjranyblalhmwdbdal
 
PDF
4 Tier Teamcenter Installation part1.pdf
VnyKumar1
 
PDF
The Complete Guide to the Role of the Fourth Engineer On Ships
Mahmoud Moghtaderi
 
PPTX
ETP Presentation(1000m3 Small ETP For Power Plant and industry
MD Azharul Islam
 
PPTX
filteration _ pre.pptx 11111110001.pptx
awasthivaibhav825
 
PPTX
MULTI LEVEL DATA TRACKING USING COOJA.pptx
dollysharma12ab
 
Natural Language processing and web deigning notes
AnithaSakthivel3
 
Jual GPS Geodetik CHCNAV i93 IMU-RTK Lanjutan dengan Survei Visual
Budi Minds
 
CFM 56-7B - Engine General Familiarization. PDF
Gianluca Foro
 
ENG8 Q1, WEEK 4.pptxoooiioooooooooooooooooooooooooo
chubbychubz1
 
Geothermal Heat Pump ppt-SHRESTH S KOKNE
SHRESTHKOKNE
 
IISM Presentation.ppt Construction safety
lovingrkn
 
PRIZ Academy - Change Flow Thinking Master Change with Confidence.pdf
PRIZ Guru
 
Natural Language processing and web deigning notes
AnithaSakthivel3
 
LEARNING CROSS-LINGUAL WORD EMBEDDINGS WITH UNIVERSAL CONCEPTS
kjim477n
 
Sensor IC System Design Using COMSOL Multiphysics 2025-July.pptx
James D.B. Wang, PhD
 
ENSA_Module_8.pptx_nice_ipsec_presentation
RanaMukherjee24
 
MOBILE AND WEB BASED REMOTE BUSINESS MONITORING SYSTEM
ijait
 
th International conference on Big Data, Machine learning and Applications (B...
Zac Darcy
 
GitHub_Copilot_Basics...........................pptx
ssusera13041
 
Cyclic_Redundancy_Check_Presentation.pptx
alhjranyblalhmwdbdal
 
4 Tier Teamcenter Installation part1.pdf
VnyKumar1
 
The Complete Guide to the Role of the Fourth Engineer On Ships
Mahmoud Moghtaderi
 
ETP Presentation(1000m3 Small ETP For Power Plant and industry
MD Azharul Islam
 
filteration _ pre.pptx 11111110001.pptx
awasthivaibhav825
 
MULTI LEVEL DATA TRACKING USING COOJA.pptx
dollysharma12ab
 
Ad

Neural networks are parallel computing devices.docx.pdf

  • 1. Neural networks are parallel computing devices, which is basically an attempt to make a computer model of the brain. The main objective is to develop a system to perform various computational tasks faster than the traditional systems. These tasks include pattern recognition and classification, approximation, optimization, and data clustering. What is Artificial Neural Network? Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. ANNs are also named as “artificial neural systems,” or “parallel distributed processing systems,” or “connectionist systems.” ANN acquires a large collection of units that are interconnected in some pattern to allow communication between the units. These units, also referred to as nodes or neurons, are simple processors which operate in parallel. Every neuron is connected with other neuron through a connection link. Each connection link is associated with a weight that has information about the input signal. This is the most useful information for neurons to solve a particular problem because the weight usually excites or inhibits the signal that is being communicated. Each neuron has an internal state, which is called an activation signal. Output signals, which are produced after combining the input signals and activation rule, may be sent to other units. A Brief History of ANN The history of ANN can be divided into the following three eras − ANN during 1940s to 1960s Some key developments of this era are as follows − ● 1943 − It has been assumed that the concept of neural network started with the work of physiologist, Warren McCulloch, and mathematician, Walter Pitts, when in 1943 they modeled a simple neural network using electrical circuits in order to describe how neurons in the brain might work. 
● 1949 − Donald Hebb’s book, The Organization of Behavior, put forth the fact that repeated activation of one neuron by another increases its strength each time they are used. ● 1956 − An associative memory network was introduced by Taylor. ● 1958 − A learning method for McCulloch and Pitts neuron model named Perceptron was invented by Rosenblatt. ● 1960 − Bernard Widrow and Marcian Hoff developed models called "ADALINE" and “MADALINE.” ANN during 1960s to 1980s Some key developments of this era are as follows −
  • 2. ● 1961 − Rosenblatt made an unsuccessful attempt but proposed the “backpropagation” scheme for multilayer networks. ● 1964 − Taylor constructed a winner-take-all circuit with inhibitions among output units. ● 1969 − Multilayer perceptron (MLP) was invented by Minsky and Papert. ● 1971 − Kohonen developed Associative memories. ● 1976 − Stephen Grossberg and Gail Carpenter developed Adaptive resonance theory. ANN from 1980s till Present Some key developments of this era are as follows − ● 1982 − The major development was Hopfield’s Energy approach. ● 1985 − Boltzmann machine was developed by Ackley, Hinton, and Sejnowski. ● 1986 − Rumelhart, Hinton, and Williams introduced Generalised Delta Rule. ● 1988 − Kosko developed Binary Associative Memory (BAM) and also gave the concept of Fuzzy Logic in ANN. The historical review shows that significant progress has been made in this field. Neural network based chips are emerging and applications to complex problems are being developed. Surely, today is a period of transition for neural network technology. Biological Neuron A nerve cell (neuron) is a special biological cell that processes information. According to an estimation, there are huge number of neurons, approximately 1011 with numerous interconnections, approximately 1015 . Schematic Diagram
  • 3. Working of a Biological Neuron As shown in the above diagram, a typical neuron consists of the following four parts with the help of which we can explain its working − ● Dendrites − They are tree-like branches, responsible for receiving the information from other neurons it is connected to. In other sense, we can say that they are like the ears of neuron. ● Soma − It is the cell body of the neuron and is responsible for processing of information, they have received from dendrites. ● Axon − It is just like a cable through which neurons send the information. ● Synapses − It is the connection between the axon and other neuron dendrites. ANN versus BNN Before taking a look at the differences between Artificial Neural Network (ANN) and Biological Neural Network (BNN), let us take a look at the similarities based on the terminology between these two. Biological Neural Network (BNN) Artificial Neural Network (ANN) Soma Node Dendrites Input Synapse Weights or Interconnections Axon Output The following table shows the comparison between ANN and BNN based on some criteria mentioned.
  • 4. Criteria BNN ANN Processing Massively parallel, slow but superior than ANN Massively parallel, fast but inferior than BNN Size 1011 neurons and 1015 interconnections 102 to 104 nodes (mainly depends on the type of application and network designer) Learning They can tolerate ambiguity Very precise, structured and formatted data is required to tolerate ambiguity Fault tolerance Performance degrades with even partial damage It is capable of robust performance, hence has the potential to be fault tolerant Storage capacity Stores the information in the synapse Stores the information in continuous memory locations Model of Artificial Neural Network The following diagram represents the general model of ANN followed by its processing. For the above general model of artificial neural network, the net input can be calculated as follows − $$y_{in}:=:x_{1}.w_{1}:+:x_{2}.w_{2}:+:x_{3}.w_{3}:dotso: x_{m}.w_{m}$$ i.e., Net input $y_{in}:=:sum_i^m:x_{i}.w_{i}$ The output can be calculated by applying the activation function over the net input. $$Y:=:F(y_{in}) $$ Output = function (net input calculated)
  • 5. Network Topology A network topology is the arrangement of a network along with its nodes and connecting lines. According to the topology, ANN can be classified as the following kinds − Feedforward Network It is a non-recurrent network having processing units/nodes in layers and all the nodes in a layer are connected with the nodes of the previous layers. The connection has different weights upon them. There is no feedback loop means the signal can only flow in one direction, from input to output. It may be divided into the following two types − ● Single layer feedforward network − The concept is of feedforward ANN having only one weighted layer. In other words, we can say the input layer is fully connected to the output layer. ● Multilayer feedforward network − The concept is of feedforward ANN having more than one weighted layer. As this network has one or more layers between the input and the output layer, it is called hidden layers. Feedback Network
  • 6. As the name suggests, a feedback network has feedback paths, which means the signal can flow in both directions using loops. This makes it a non-linear dynamic system, which changes continuously until it reaches a state of equilibrium. It may be divided into the following types − ● Recurrent networks − They are feedback networks with closed loops. Following are the two types of recurrent networks. ● Fully recurrent network − It is the simplest neural network architecture because all nodes are connected to all other nodes and each node works as both input and output. ● Jordan network − It is a closed loop network in which the output will go to the input again as feedback as shown in the following diagram. Adjustments of Weights or Learning Learning, in artificial neural network, is the method of modifying the weights of connections between the neurons of a specified network. Learning in ANN can be classified into three categories namely supervised learning, unsupervised learning, and reinforcement learning. Supervised Learning
  • 7. As the name suggests, this type of learning is done under the supervision of a teacher. This learning process is dependent. During the training of ANN under supervised learning, the input vector is presented to the network, which will give an output vector. This output vector is compared with the desired output vector. An error signal is generated, if there is a difference between the actual output and the desired output vector. On the basis of this error signal, the weights are adjusted until the actual output is matched with the desired output. Unsupervised Learning As the name suggests, this type of learning is done without the supervision of a teacher. This learning process is independent. During the training of ANN under unsupervised learning, the input vectors of similar type are combined to form clusters. When a new input pattern is applied, then the neural network gives an output response indicating the class to which the input pattern belongs. There is no feedback from the environment as to what should be the desired output and if it is correct or incorrect. Hence, in this type of learning, the network itself must discover the patterns and features from the input data, and the relation for the input data over the output. Reinforcement Learning As the name suggests, this type of learning is used to reinforce or strengthen the network over some critic information. This learning process is similar to supervised learning, however we might have very less information.
  • 8. During the training of network under reinforcement learning, the network receives some feedback from the environment. This makes it somewhat similar to supervised learning. However, the feedback obtained here is evaluative not instructive, which means there is no teacher as in supervised learning. After receiving the feedback, the network performs adjustments of the weights to get better critic information in future. Activation Functions It may be defined as the extra force or effort applied over the input to obtain an exact output. In ANN, we can also apply activation functions over the input to get the exact output. Followings are some activation functions of interest − Linear Activation Function It is also called the identity function as it performs no input editing. It can be defined as − $$F(x):=:x$$ Sigmoid Activation Function It is of two type as follows − ● Binary sigmoidal function − This activation function performs input editing between 0 and 1. It is positive in nature. It is always bounded, which means its output cannot be less than 0 and more than 1. It is also strictly increasing in nature, which means more the input higher would be the output. It can be defined as $$F(x):=:sigm(x):=:frac{1}{1:+:exp(-x)}$$ ● Bipolar sigmoidal function − This activation function performs input editing between -1 and 1. It can be positive or negative in nature. It is always bounded, which means its output cannot be less than -1 and more than 1. It is also strictly increasing in nature like sigmoid function. It can be defined as
  • 9. $$F(x):=:sigm(x):=:frac{2}{1:+:exp(-x)}:-:1:=:frac{1:-:exp(x)}{1:+:exp(x )}$$ As stated earlier, ANN is completely inspired by the way biological nervous system, i.e. the human brain works. The most impressive characteristic of the human brain is to learn, hence the same feature is acquired by ANN. What Is Learning in ANN? Basically, learning means to do and adapt the change in itself as and when there is a change in environment. ANN is a complex system or more precisely we can say that it is a complex adaptive system, which can change its internal structure based on the information passing through it. Why Is It important? Being a complex adaptive system, learning in ANN implies that a processing unit is capable of changing its input/output behavior due to the change in environment. The importance of learning in ANN increases because of the fixed activation function as well as the input/output vector, when a particular network is constructed. Now to change the input/output behavior, we need to adjust the weights. Classification It may be defined as the process of learning to distinguish the data of samples into different classes by finding common features between the samples of the same classes. For example, to perform training of ANN, we have some training samples with unique features, and to perform its testing we have some testing samples with other unique features. Classification is an example of supervised learning. Neural Network Learning Rules We know that, during ANN learning, to change the input/output behavior, we need to adjust the weights. Hence, a method is required with the help of which the weights can be modified. These methods are called Learning rules, which are simply algorithms or equations. Following are some learning rules for the neural network − Hebbian Learning Rule This rule, one of the oldest and simplest, was introduced by Donald Hebb in his book The Organization of Behavior in 1949. 
It is a kind of feed-forward, unsupervised learning. Basic Concept − This rule is based on a proposal given by Hebb, who wrote − “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.” From the above postulate, we can conclude that the connections between two neurons might be strengthened if the neurons fire at the same time and might weaken if they fire at different times. Mathematical Formulation − According to Hebbian learning rule, following is the formula to increase the weight of connection at every time step.
  • 10. $$Delta w_{ji}(t):=:alpha x_{i}(t).y_{j}(t)$$ Here, $Delta w_{ji}(t)$ ⁡ = increment by which the weight of connection increases at time step t $alpha$ = the positive and constant learning rate $x_{i}(t)$ = the input value from pre-synaptic neuron at time step t $y_{i}(t)$ = the output of pre-synaptic neuron at same time step t Perceptron Learning Rule This rule is an error correcting the supervised learning algorithm of single layer feedforward networks with linear activation function, introduced by Rosenblatt. Basic Concept − As being supervised in nature, to calculate the error, there would be a comparison between the desired/target output and the actual output. If there is any difference found, then a change must be made to the weights of connection. Mathematical Formulation − To explain its mathematical formulation, suppose we have ‘n’ number of finite input vectors, x(n), along with its desired/target output vector t(n), where n = 1 to N. Now the output ‘y’ can be calculated, as explained earlier on the basis of the net input, and activation function being applied over that net input can be expressed as follows − $$y:=:f(y_{in}):=:begin{cases}1, & y_{in}:>:theta 0, & y_{in}:leqslant:thetaend{cases}$$ Where θ is threshold. The updating of weight can be done in the following two cases − Case I − when t ≠ y, then $$w(new):=:w(old):+;tx$$ Case II − when t = y, then No change in weight Delta Learning Rule (Widrow-Hoff Rule) It is introduced by Bernard Widrow and Marcian Hoff, also called Least Mean Square (LMS) method, to minimize the error over all training patterns. It is kind of supervised learning algorithm with having continuous activation function. Basic Concept − The base of this rule is gradient-descent approach, which continues forever. Delta rule updates the synaptic weights so as to minimize the net input to the output unit and the target value. 
Mathematical Formulation − To update the synaptic weights, delta rule is given by $$Delta w_{i}:=:alpha:.x_{i}.e_{j}$$ Here $Delta w_{i}$ = weight change for ith ⁡ pattern; $alpha$ = the positive and constant learning rate;
  • 11. $x_{i}$ = the input value from pre-synaptic neuron; $e_{j}$ = $(t:-:y_{in})$, the difference between the desired/target output and the actual output ⁡ $y_{in}$ The above delta rule is for a single output unit only. The updating of weight can be done in the following two cases − Case-I − when t ≠ y, then $$w(new):=:w(old):+:Delta w$$ Case-II − when t = y, then No change in weight As the name suggests, supervised learning takes place under the supervision of a teacher. This learning process is dependent. During the training of ANN under supervised learning, the input vector is presented to the network, which will produce an output vector. This output vector is compared with the desired/target output vector. An error signal is generated if there is a difference between the actual output and the desired/target output vector. On the basis of this error signal, the weights would be adjusted until the actual output is matched with the desired output. Perceptron Developed by Frank Rosenblatt by using McCulloch and Pitts model, perceptron is the basic operational unit of artificial neural networks. It employs supervised learning rule and is able to classify the data into two classes. Operational characteristics of the perceptron: It consists of a single neuron with an arbitrary number of inputs along with adjustable weights, but the output of the neuron is 1 or 0 depending upon the threshold. It also consists of a bias whose weight is always 1. Following figure gives a schematic representation of the perceptron.
  • 12. Perceptron thus has the following three basic elements − ● Links − It would have a set of connection links, which carries a weight including a bias always having weight 1. ● Adder − It adds the input after they are multiplied with their respective weights. ● Activation function − It limits the output of neuron. The most basic activation function is a Heaviside step function that has two possible outputs. This function returns 1, if the input is positive, and 0 for any negative input. Training Algorithm Perceptron network can be trained for single output unit as well as multiple output units. Training Algorithm for Single Output Unit Step 1 − Initialize the following to start the training − ● Weights ● Bias ● Learning rate $alpha$ For easy calculation and simplicity, weights and bias must be set equal to 0 and the learning rate must be set equal to 1. Step 2 − Continue step 3-8 when the stopping condition is not true. Step 3 − Continue step 4-6 for every training vector x. Step 4 − Activate each input unit as follows − $$x_{i}:=:s_{i}:(i:=:1:to:n)$$ Step 5 − Now obtain the net input with the following relation − $$y_{in}:=:b:+:displaystylesumlimits_{i}^n x_{i}.:w_{i}$$ Here ‘b’ is bias and ‘n’ is the total number of input neurons. Step 6 − Apply the following activation function to obtain the final output. $$f(y_{in}):=:begin{cases}1 & if:y_{in}:>:theta0 & if : -theta:leqslant:y_{in}:leqslant:theta-1 & if:y_{in}:<:-theta end{cases}$$ Step 7 − Adjust the weight and bias as follows − Case 1 − if y ≠ t then, $$w_{i}(new):=:w_{i}(old):+:alpha:tx_{i}$$ $$b(new):=:b(old):+:alpha t$$
Case 2 − if y = t then,

$$w_{i}(new) = w_{i}(old)$$

$$b(new) = b(old)$$

Here ‘y’ is the actual output and ‘t’ is the desired/target output.

Step 8 − Test for the stopping condition, which is met when there is no change in the weights.

Training Algorithm for Multiple Output Units

The following diagram shows the architecture of the perceptron for multiple output classes.

Step 1 − Initialize the following to start the training −

● Weights
● Bias
● Learning rate $\alpha$

For easy calculation and simplicity, the weights and bias may be set equal to 0 and the learning rate equal to 1.

Step 2 − Continue steps 3-8 while the stopping condition is not true.

Step 3 − Continue steps 4-6 for every training vector x.

Step 4 − Activate each input unit as follows −
$$x_{i} = s_{i} \;\; (i = 1 \text{ to } n)$$

Step 5 − Obtain the net input at each output unit with the following relation −

$$y_{inj} = b_{j} + \sum_{i=1}^{n} x_{i}\,w_{ij}$$

Here ‘b’ is the bias and ‘n’ is the total number of input neurons.

Step 6 − Apply the following activation function to obtain the final output for each output unit j = 1 to m −

$$f(y_{inj}) = \begin{cases}1 & \text{if } y_{inj} > \theta \\ 0 & \text{if } -\theta \leqslant y_{inj} \leqslant \theta \\ -1 & \text{if } y_{inj} < -\theta \end{cases}$$

Step 7 − Adjust the weight and bias for i = 1 to n and j = 1 to m as follows −

Case 1 − if yj ≠ tj then,

$$w_{ij}(new) = w_{ij}(old) + \alpha\,t_{j}\,x_{i}$$

$$b_{j}(new) = b_{j}(old) + \alpha\,t_{j}$$

Case 2 − if yj = tj then,

$$w_{ij}(new) = w_{ij}(old)$$

$$b_{j}(new) = b_{j}(old)$$

Here ‘y’ is the actual output and ‘t’ is the desired/target output.

Step 8 − Test for the stopping condition, which is met when there is no change in the weights.

Adaptive Linear Neuron (Adaline)

Adaline, which stands for Adaptive Linear Neuron, is a network having a single linear unit. It was developed by Widrow and Hoff in 1960. Some important points about Adaline are as follows −

● It uses a bipolar activation function.
● It uses the delta rule for training to minimize the Mean-Squared Error (MSE) between the actual output and the desired/target output.
● The weights and the bias are adjustable.

Architecture

The basic structure of Adaline is similar to the perceptron with an extra feedback loop, with the help of which the actual output is compared with the desired/target output. After this comparison, on the basis of the training algorithm, the weights and bias are updated.
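The delta-rule training that the algorithm below formalizes can be sketched in Python first. This is a minimal illustration; the bipolar AND data, the learning rate, and the tolerance are assumed for the example.

```python
import numpy as np

def adaline_train(X, t, alpha=0.1, tolerance=1e-3, max_epochs=100):
    """Delta-rule training that reduces the squared error (t - y_in)^2."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_epochs):
        largest_change = 0.0
        for x_i, t_i in zip(X, t):
            y_in = b + x_i @ w              # net input of the linear unit
            err = t_i - y_in                # computed error (t - y_in)
            w += alpha * err * x_i          # delta-rule weight update
            b += alpha * err
            largest_change = max(largest_change, abs(alpha * err))
        if largest_change < tolerance:      # stopping condition
            break
    return w, b

# Bipolar AND example (assumed data); output is 1 if y_in >= 0, else -1
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([1, -1, -1, -1])
w, b = adaline_train(X, t)
```

Unlike the perceptron, the update here is driven by the continuous error $(t - y_{in})$ rather than by the thresholded output, which is what makes the rule minimize the mean-squared error.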
Training Algorithm

Step 1 − Initialize the following to start the training −

● Weights
● Bias
● Learning rate $\alpha$

For easy calculation and simplicity, the weights and bias may be set equal to 0 and the learning rate equal to 1.

Step 2 − Continue steps 3-8 while the stopping condition is not true.

Step 3 − Continue steps 4-6 for every bipolar training pair s:t.

Step 4 − Activate each input unit as follows −

$$x_{i} = s_{i} \;\; (i = 1 \text{ to } n)$$

Step 5 − Obtain the net input with the following relation −

$$y_{in} = b + \sum_{i=1}^{n} x_{i}\,w_{i}$$

Here ‘b’ is the bias and ‘n’ is the total number of input neurons.

Step 6 − Apply the following activation function to obtain the final output −

$$f(y_{in}) = \begin{cases}1 & \text{if } y_{in} \geqslant 0 \\ -1 & \text{if } y_{in} < 0 \end{cases}$$

Step 7 − Adjust the weight and bias as follows −
Case 1 − if y ≠ t then,

$$w_{i}(new) = w_{i}(old) + \alpha(t - y_{in})x_{i}$$

$$b(new) = b(old) + \alpha(t - y_{in})$$

Case 2 − if y = t then,

$$w_{i}(new) = w_{i}(old)$$

$$b(new) = b(old)$$

Here ‘y’ is the actual output, ‘t’ is the desired/target output, and $(t - y_{in})$ is the computed error.

Step 8 − Test for the stopping condition, which is met when there is no change in the weights or when the largest weight change during training is smaller than a specified tolerance.

Multiple Adaptive Linear Neuron (Madaline)

Madaline, which stands for Multiple Adaptive Linear Neuron, is a network consisting of many Adalines in parallel. It has a single output unit. Some important points about Madaline are as follows −

● It is just like a multilayer perceptron, where the Adalines act as hidden units between the input and the Madaline layer.
● The weights and the bias between the input and Adaline layers, as we see in the Adaline architecture, are adjustable.
● The weights and the bias between the Adaline and Madaline layers are fixed at 1.
● Training can be done with the help of the Delta rule.

Architecture

The architecture of Madaline consists of ‘n’ neurons in the input layer, ‘m’ neurons in the Adaline layer, and 1 neuron in the Madaline layer. The Adaline layer can be considered a hidden layer, as it lies between the input layer and the output layer, i.e. the Madaline layer.
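The layered structure just described can be sketched as a forward pass. The input-to-Adaline weights below are random placeholders for illustration only; the weights and bias into the Madaline unit are fixed at 1, as noted above.

```python
import numpy as np

def bipolar_step(x):
    """Activation used at both the Adaline and the Madaline layer."""
    return np.where(x >= 0, 1, -1)

def madaline_forward(x, V, b_hidden):
    """Forward pass: input -> m Adaline units -> single Madaline output unit.

    V and b_hidden (input-to-Adaline weights/bias) are the adjustable part;
    the weights and bias into the Madaline unit are fixed at 1.
    """
    q_in = b_hidden + x @ V      # net input Q_inj of each Adaline unit
    q = bipolar_step(q_in)       # hidden (Adaline) outputs Q_j
    y_in = 1 + q.sum()           # b_0 = 1 and all hidden-to-output weights = 1
    return bipolar_step(y_in)    # final Madaline output y

rng = np.random.default_rng(0)
n, m = 2, 3                      # n input units, m Adaline units (illustrative)
V = rng.normal(size=(n, m))      # untrained placeholder weights
b_hidden = np.zeros(m)
y = madaline_forward(np.array([1, -1]), V, b_hidden)
```

Only V and b_hidden would change during training; the training algorithm below spells out how.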
Training Algorithm

By now we know that only the weights and bias between the input and the Adaline layer are to be adjusted; the weights and bias between the Adaline and the Madaline layer are fixed.

Step 1 − Initialize the following to start the training −

● Weights
● Bias
● Learning rate $\alpha$

For easy calculation and simplicity, the weights and bias may be set equal to 0 and the learning rate equal to 1.

Step 2 − Continue steps 3-8 while the stopping condition is not true.

Step 3 − Continue steps 4-6 for every bipolar training pair s:t.

Step 4 − Activate each input unit as follows −

$$x_{i} = s_{i} \;\; (i = 1 \text{ to } n)$$

Step 5 − Obtain the net input at each unit of the hidden (Adaline) layer with the following relation −

$$Q_{inj} = b_{j} + \sum_{i=1}^{n} x_{i}\,w_{ij} \;\; (j = 1 \text{ to } m)$$

Here ‘b’ is the bias and ‘n’ is the total number of input neurons.

Step 6 − Apply the following activation function to obtain the final output at the Adaline and the Madaline layer −
$$f(x) = \begin{cases}1 & \text{if } x \geqslant 0 \\ -1 & \text{if } x < 0 \end{cases}$$

Output at the hidden (Adaline) unit −

$$Q_{j} = f(Q_{inj})$$

Final output of the network −

$$y = f(y_{in})$$

i.e. $y_{in} = b_{0} + \sum_{j=1}^{m} Q_{j}\,v_{j}$

Step 7 − Calculate the error and adjust the weights as follows −

Case 1 − if y ≠ t and t = 1 then,

$$w_{ij}(new) = w_{ij}(old) + \alpha(1 - Q_{inj})x_{i}$$

$$b_{j}(new) = b_{j}(old) + \alpha(1 - Q_{inj})$$

In this case, the weights are updated on the unit Qj whose net input is closest to 0, because t = 1.

Case 2 − if y ≠ t and t = -1 then,

$$w_{ik}(new) = w_{ik}(old) + \alpha(-1 - Q_{ink})x_{i}$$

$$b_{k}(new) = b_{k}(old) + \alpha(-1 - Q_{ink})$$

In this case, the weights are updated on the units Qk whose net input is positive, because t = -1.

Here ‘y’ is the actual output and ‘t’ is the desired/target output.

Case 3 − if y = t then,

There is no change in the weights.

Step 8 − Test for the stopping condition, which is met when there is no change in the weights or when the largest weight change during training is smaller than a specified tolerance.

Back Propagation Neural Networks

A Back Propagation Neural network (BPN) is a multilayer neural network consisting of an input layer, at least one hidden layer, and an output layer. As its name suggests, back-propagation takes place in this network. The error, which is calculated at the output layer by comparing the target output and the actual output, is propagated back towards the input layer.
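The forward-and-backward flow just described can be sketched end-to-end on a tiny network. The layer sizes, XOR data, learning rate, and epoch count below are all assumed for illustration; the binary sigmoid matches the activation that BPN training uses.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # binary sigmoid

# Illustrative sizes: n inputs, p hidden units, m outputs
rng = np.random.default_rng(1)
n, p, m, alpha = 2, 4, 1, 0.5
V = rng.normal(scale=0.5, size=(n, p)); b_v = np.zeros(p)   # input -> hidden
W = rng.normal(scale=0.5, size=(p, m)); b_w = np.zeros(m)   # hidden -> output

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

for _ in range(10000):
    for x, t in zip(X, T):
        # Phase 1: feed forward
        q = sigmoid(b_v + x @ V)                  # hidden outputs Q_j
        y = sigmoid(b_w + q @ W)                  # network outputs y_k
        # Phase 2: back propagation of error (f'(x) = f(x)(1 - f(x)))
        delta_k = (t - y) * y * (1 - y)           # output error term
        delta_j = (delta_k @ W.T) * q * (1 - q)   # hidden error term
        # Phase 3: weight updates
        W += alpha * np.outer(q, delta_k); b_w += alpha * delta_k
        V += alpha * np.outer(x, delta_j); b_v += alpha * delta_j

preds = [float(sigmoid(b_w + sigmoid(b_v + x @ V) @ W)[0]) for x in X]
```

The training algorithm that follows states these three phases precisely, step by step.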
Architecture

As shown in the diagram, the architecture of BPN has three interconnected layers with weights on the connections between them. The hidden layer as well as the output layer also has a bias, whose weight is always 1. As is clear from the diagram, BPN works in two phases: one phase sends the signal from the input layer to the output layer, and the other phase propagates the error back from the output layer to the input layer.

Training Algorithm

For training, BPN uses the binary sigmoid activation function. The training of BPN has the following three phases −

● Phase 1 − Feed Forward Phase
● Phase 2 − Back Propagation of error
● Phase 3 − Updating of weights

All these phases are combined in the following algorithm.

Step 1 − Initialize the following to start the training −

● Weights
● Learning rate $\alpha$

For easy calculation and simplicity, take some small random values.
Step 2 − Continue steps 3-11 while the stopping condition is not true.

Step 3 − Continue steps 4-10 for every training pair.

Phase 1

Step 4 − Each input unit receives an input signal xi and sends it to the hidden units, for all i = 1 to n.

Step 5 − Calculate the net input at each hidden unit using the following relation −

$$Q_{inj} = b_{0j} + \sum_{i=1}^{n} x_{i}\,v_{ij} \;\; (j = 1 \text{ to } p)$$

Here $b_{0j}$ is the bias on hidden unit j, and $v_{ij}$ is the weight on unit j of the hidden layer coming from unit i of the input layer.

Now calculate the output by applying the following activation function −

$$Q_{j} = f(Q_{inj})$$

Send these output signals of the hidden layer units to the output layer units.

Step 6 − Calculate the net input at each output layer unit using the following relation −

$$y_{ink} = b_{0k} + \sum_{j=1}^{p} Q_{j}\,w_{jk} \;\; (k = 1 \text{ to } m)$$

Here $b_{0k}$ is the bias on output unit k, and $w_{jk}$ is the weight on unit k of the output layer coming from unit j of the hidden layer.

Calculate the output by applying the following activation function −

$$y_{k} = f(y_{ink})$$

Phase 2

Step 7 − Compute the error-correcting term, in correspondence with the target pattern received at each output unit, as follows −

$$\delta_{k} = (t_{k} - y_{k})f^{\prime}(y_{ink})$$

On this basis, compute the weight and bias corrections as follows −

$$\Delta w_{jk} = \alpha\,\delta_{k}\,Q_{j}$$

$$\Delta b_{0k} = \alpha\,\delta_{k}$$

Then, send $\delta_{k}$ back to the hidden layer.

Step 8 − Now each hidden unit sums its delta inputs from the output units.
$$\delta_{inj} = \sum_{k=1}^{m} \delta_{k}\,w_{jk}$$

The error term can then be calculated as follows −

$$\delta_{j} = \delta_{inj}\,f^{\prime}(Q_{inj})$$

On this basis, compute the weight and bias corrections as follows −

$$\Delta v_{ij} = \alpha\,\delta_{j}\,x_{i}$$

$$\Delta b_{0j} = \alpha\,\delta_{j}$$

Phase 3

Step 9 − Each output unit (yk, k = 1 to m) updates its weights and bias as follows −

$$w_{jk}(new) = w_{jk}(old) + \Delta w_{jk}$$

$$b_{0k}(new) = b_{0k}(old) + \Delta b_{0k}$$

Step 10 − Each hidden unit (Qj, j = 1 to p) updates its weights and bias as follows −

$$v_{ij}(new) = v_{ij}(old) + \Delta v_{ij}$$

$$b_{0j}(new) = b_{0j}(old) + \Delta b_{0j}$$

Step 11 − Check for the stopping condition, which may be either reaching the specified number of epochs or the target output matching the actual output.

Generalized Delta Learning Rule

The delta rule works only for the output layer. The generalized delta rule, also called the back-propagation rule, is a way of creating the desired values for the hidden layer.

Mathematical Formulation

For the activation function $y_{k} = f(y_{ink})$, the net inputs at the output layer and at the hidden layer are given by

$$y_{ink} = \sum_{j} z_{j}\,w_{jk}$$

and $y_{inj} = \sum_{i} x_{i}\,v_{ij}$

Now the error which has to be minimized is

$$E = \frac{1}{2}\sum_{k}[t_{k} - y_{k}]^2$$

By using the chain rule, we have
$$\frac{\partial E}{\partial w_{jk}} = \frac{\partial}{\partial w_{jk}}\left(\frac{1}{2}\sum_{k}[t_{k} - y_{k}]^2\right)$$

$$= \frac{\partial}{\partial w_{jk}}\left(\frac{1}{2}[t_{k} - f(y_{ink})]^2\right)$$

$$= -[t_{k} - y_{k}]\frac{\partial}{\partial w_{jk}}f(y_{ink})$$

$$= -[t_{k} - y_{k}]f^{\prime}(y_{ink})\frac{\partial}{\partial w_{jk}}(y_{ink})$$

$$= -[t_{k} - y_{k}]f^{\prime}(y_{ink})\,z_{j}$$

Now let us say $\delta_{k} = [t_{k} - y_{k}]f^{\prime}(y_{ink})$, so that $\frac{\partial E}{\partial w_{jk}} = -\delta_{k}\,z_{j}$.

For the weights on connections to a hidden unit zj, we have −

$$\frac{\partial E}{\partial v_{ij}} = -\sum_{k}\delta_{k}\frac{\partial}{\partial v_{ij}}(y_{ink})$$

Putting in the value of $y_{ink}$, we get the following −

$$\delta_{j} = \sum_{k}\delta_{k}\,w_{jk}\,f^{\prime}(z_{inj})$$

Weight updating can be done as follows −

For the output unit −

$$\Delta w_{jk} = -\alpha\frac{\partial E}{\partial w_{jk}} = \alpha\,\delta_{k}\,z_{j}$$

For the hidden unit −

$$\Delta v_{ij} = -\alpha\frac{\partial E}{\partial v_{ij}} = \alpha\,\delta_{j}\,x_{i}$$

These kinds of neural networks work on the basis of pattern association, which means they can store different patterns and, at the time of giving an output, produce one of the stored patterns by matching it with the given input pattern. These types of memories are also called Content-Addressable Memory (CAM). Associative memory makes a parallel search through the stored patterns.

Following are the two types of associative memories we can observe −

● Auto Associative Memory
● Hetero Associative Memory

Auto Associative Memory
This is a single-layer neural network in which the input training vector and the output target vector are the same. The weights are determined so that the network stores a set of patterns.

Architecture

As shown in the following figure, the architecture of the Auto Associative Memory network has ‘n’ input training units and the same number ‘n’ of output target units.

Training Algorithm

For training, this network uses the Hebb or Delta learning rule.

Step 1 − Initialize all the weights to zero: $w_{ij} = 0 \;\; (i = 1 \text{ to } n, j = 1 \text{ to } n)$

Step 2 − Perform steps 3-4 for each input vector.

Step 3 − Activate each input unit as follows −

$$x_{i} = s_{i} \;\; (i = 1 \text{ to } n)$$

Step 4 − Activate each output unit as follows −

$$y_{j} = s_{j} \;\; (j = 1 \text{ to } n)$$

Step 5 − Adjust the weights as follows −

$$w_{ij}(new) = w_{ij}(old) + x_{i}\,y_{j}$$

Testing Algorithm

Step 1 − Set the weights obtained during training with Hebb’s rule.

Step 2 − Perform steps 3-5 for each input vector.
Step 3 − Set the activation of the input units equal to that of the input vector.

Step 4 − Calculate the net input to each output unit j = 1 to n −

$$y_{inj} = \sum_{i=1}^{n} x_{i}\,w_{ij}$$

Step 5 − Apply the following activation function to calculate the output −

$$y_{j} = f(y_{inj}) = \begin{cases}+1 & \text{if } y_{inj} > 0 \\ -1 & \text{if } y_{inj} \leqslant 0 \end{cases}$$

Hetero Associative Memory

Similar to the Auto Associative Memory network, this is also a single-layer neural network. However, in this network the input training vector and the output target vector are not the same. The weights are determined so that the network stores a set of patterns. The hetero associative network is static in nature; hence there are no non-linear and delay operations.

Architecture

As shown in the following figure, the architecture of the Hetero Associative Memory network has ‘n’ input training units and ‘m’ output target units.

Training Algorithm

For training, this network uses the Hebb or Delta learning rule.

Step 1 − Initialize all the weights to zero: $w_{ij} = 0 \;\; (i = 1 \text{ to } n, j = 1 \text{ to } m)$

Step 2 − Perform steps 3-4 for each input vector.

Step 3 − Activate each input unit as follows −

$$x_{i} = s_{i} \;\; (i = 1 \text{ to } n)$$
Step 4 − Activate each output unit as follows −

$$y_{j} = s_{j} \;\; (j = 1 \text{ to } m)$$

Step 5 − Adjust the weights as follows −

$$w_{ij}(new) = w_{ij}(old) + x_{i}\,y_{j}$$

Testing Algorithm

Step 1 − Set the weights obtained during training with Hebb’s rule.

Step 2 − Perform steps 3-5 for each input vector.

Step 3 − Set the activation of the input units equal to that of the input vector.

Step 4 − Calculate the net input to each output unit j = 1 to m −

$$y_{inj} = \sum_{i=1}^{n} x_{i}\,w_{ij}$$

Step 5 − Apply the following activation function to calculate the output −

$$y_{j} = f(y_{inj}) = \begin{cases}+1 & \text{if } y_{inj} > 0 \\ 0 & \text{if } y_{inj} = 0 \\ -1 & \text{if } y_{inj} < 0 \end{cases}$$

The Hopfield neural network was invented by Dr. John J. Hopfield in 1982. It consists of a single layer containing one or more fully connected recurrent neurons. The Hopfield network is commonly used for auto-association and optimization tasks.

Discrete Hopfield Network

A discrete Hopfield network operates in a discrete fashion; in other words, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, −1) in nature. The network has symmetric weights with no self-connections, i.e., $w_{ij} = w_{ji}$ and $w_{ii} = 0$.

Architecture

Following are some important points to keep in mind about the discrete Hopfield network −

● This model consists of neurons with one inverting and one non-inverting output.

● The output of each neuron should be the input of the other neurons but not the input of itself.

● The weight/connection strength is represented by $w_{ij}$.
● Connections can be excitatory as well as inhibitory. A connection is excitatory if the output of the neuron is the same as the input, otherwise inhibitory.

● Weights should be symmetric, i.e. $w_{ij} = w_{ji}$

The output from $Y_{1}$ going to $Y_{2}$, $Y_{i}$ and $Y_{n}$ has the weights $w_{12}$, $w_{1i}$ and $w_{1n}$ respectively. Similarly, the other arcs have their weights on them.

Training Algorithm

During training of the discrete Hopfield network, the weights will be updated. As we know, we can have binary input vectors as well as bipolar input vectors. Hence, in both cases, weight updates can be done with the following relations.

Case 1 − Binary input patterns

For a set of binary patterns s(p), p = 1 to P

Here, $s(p) = (s_{1}(p), s_{2}(p), \ldots, s_{i}(p), \ldots, s_{n}(p))$

The weight matrix is given by
$$w_{ij} = \sum_{p=1}^{P}[2s_{i}(p) - 1][2s_{j}(p) - 1] \;\; \text{for } i \neq j$$

Case 2 − Bipolar input patterns

For a set of bipolar patterns s(p), p = 1 to P

Here, $s(p) = (s_{1}(p), s_{2}(p), \ldots, s_{i}(p), \ldots, s_{n}(p))$

The weight matrix is given by

$$w_{ij} = \sum_{p=1}^{P} s_{i}(p)\,s_{j}(p) \;\; \text{for } i \neq j$$

Testing Algorithm

Step 1 − Initialize the weights, which are obtained from the training algorithm using the Hebbian principle.

Step 2 − Perform steps 3-9 while the activations of the network have not converged.

Step 3 − For each input vector X, perform steps 4-8.

Step 4 − Make the initial activation of the network equal to the external input vector X as follows −

$$y_{i} = x_{i} \;\; \text{for } i = 1 \text{ to } n$$

Step 5 − For each unit Yi, perform steps 6-9.

Step 6 − Calculate the net input of the network as follows −

$$y_{ini} = x_{i} + \sum_{j} y_{j}\,w_{ji}$$

Step 7 − Apply the activation as follows over the net input to calculate the output −

$$y_{i} = \begin{cases}1 & \text{if } y_{ini} > \theta_{i} \\ y_{i} & \text{if } y_{ini} = \theta_{i} \\ 0 & \text{if } y_{ini} < \theta_{i} \end{cases}$$

Here $\theta_{i}$ is the threshold.

Step 8 − Broadcast this output yi to all the other units.

Step 9 − Test the network for convergence.
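The bipolar training rule and the recall procedure above can be sketched together. The stored pattern, the corrupted probe, and the zero thresholds are assumed for this example.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for bipolar patterns, with no self-connections."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:            # w_ij = sum_p s_i(p) s_j(p)
        W += np.outer(s, s)
    np.fill_diagonal(W, 0)        # enforce w_ii = 0
    return W

def hopfield_recall(W, x, steps=10):
    """Asynchronous recall (thresholds assumed 0): one unit updates at a time."""
    y = x.copy()
    for _ in range(steps):
        changed = False
        for i in range(len(y)):
            y_in = x[i] + W[:, i] @ y   # net input y_ini = x_i + sum_j y_j w_ji
            new = 1 if y_in > 0 else (y[i] if y_in == 0 else -1)
            if new != y[i]:
                y[i] = new
                changed = True
        if not changed:                  # converged: no activation changed
            break
    return y

stored = np.array([[1, 1, 1, -1]])       # one assumed bipolar pattern
W = hopfield_train(stored)
noisy = np.array([1, -1, 1, -1])         # corrupted version of the pattern
recalled = hopfield_recall(W, noisy)
```

Starting from the corrupted probe, the asynchronous updates drive the state back to the stored pattern.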
Energy Function Evaluation

An energy function is defined as a function that is bounded and non-increasing in the state of the system. The energy function $E_{f}$, also called the Lyapunov function, determines the stability of the discrete Hopfield network, and is characterized as follows −

$$E_{f} = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_{i}\,y_{j}\,w_{ij} - \sum_{i=1}^{n} x_{i}\,y_{i} + \sum_{i=1}^{n} \theta_{i}\,y_{i}$$

Condition − In a stable network, whenever the state of a node changes, the above energy function decreases.

Suppose node i changes state from $y_{i}^{(k)}$ to $y_{i}^{(k+1)}$. Then the energy change $\Delta E_{f}$ is given by the following relation −

$$\Delta E_{f} = E_{f}(y_{i}^{(k+1)}) - E_{f}(y_{i}^{(k)})$$

$$= -\left(\sum_{j=1}^{n} w_{ij}\,y_{j}^{(k)} + x_{i} - \theta_{i}\right)(y_{i}^{(k+1)} - y_{i}^{(k)})$$

$$= -(net_{i})\,\Delta y_{i}$$

Here $\Delta y_{i} = y_{i}^{(k+1)} - y_{i}^{(k)}$

The change in energy relies on the fact that only one unit can update its activation at a time.
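The energy function above can be computed directly, and evaluating it before and after a single-unit update illustrates the non-increasing property. The stored pattern, corrupted state, and zero thresholds below are assumed for illustration.

```python
import numpy as np

def energy(W, y, x, theta):
    """Lyapunov energy of a discrete Hopfield state y with external input x."""
    return (-0.5 * y @ W @ y) - x @ y + theta @ y

# Assumed example: one stored bipolar pattern and a corrupted initial state
s = np.array([1, -1, 1, -1])
W = np.outer(s, s).astype(float)
np.fill_diagonal(W, 0)                        # no self-connections
x = np.array([1, -1, -1, -1], dtype=float)    # corrupted input, initial state
theta = np.zeros(4)

y = x.copy()
e_before = energy(W, y, x, theta)
# Update unit i = 2 according to its net input y_ini = x_i + sum_j y_j w_ji
i = 2
net = x[i] + W[:, i] @ y
y[i] = 1.0 if net > theta[i] else (-1.0 if net < theta[i] else y[i])
e_after = energy(W, y, x, theta)
# A single-unit update cannot increase the energy: e_after <= e_before
```

Here the update flips unit 2 back toward the stored pattern, and the energy drops accordingly.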