
Neural Networks and Deep Learning

UNIT – 4
Feedback Neural Networks

Dr. D. SUDHEER
Assistant Professor
Computer Science and Engineering
VNRVJIET
© Dr. Devulapalli Sudheer
Feedback Neural Networks

• A feedback network consists of a set of processing units; the output of each unit is fed as input to all the other units, including the unit itself.
• The simplest one is an autoassociation task, which can be performed
by a feedback network consisting of linear processing units.
• If the input is noisy, the output is also noisy, giving an error in recall even with an optimal setting of the weights. A linear autoassociative network therefore has no practical use.
• By using a nonlinear output function for each processing unit, a
feedback network can be used for pattern storage.
Auto associative feedback network
• The linear autoassociation task can also be realized by a single-layer feedback network with linear processing units.

• The condition for autoassociation, namely W a_l = a_l, is satisfied if W = I, the identity matrix.
• This is due to lack of accretive behaviour during recall, and such
a feedback network is not useful for storing information. It is
possible to make a feedback network useful, especially for pattern
storage, if the linear processing units are replaced with processing
units having nonlinear output functions.
Analysis of Pattern Storage Networks
• The objective in a pattern storage task is to store a given set of
patterns, so that any of them can be recalled exactly when an
approximate version of the corresponding pattern is presented to
the network.
• For this purpose, the features and their spatial relations in the
patterns need to be stored.
• The pattern recall should take place even when the features and
their spatial relations are slightly disturbed due to noise and
distortion or due to natural variation of the pattern generating
process.
• The approximation of a pattern refers to the closeness of the
features and their spatial relations to the original stored pattern.
• Sometimes the data itself is actually stored through the weights,
as in the case of binary patterns. In this case the approximation
can be measured in terms of some distance, like Hamming
distance, between the patterns.
• Pattern storage is generally accomplished by a feedback network
consisting of processing units with nonlinear output functions.
• Associated with each output state is an energy (to be defined
later) which depends on the network parameters like the weights
and bias, besides the state of the network.
• The energy as a function of the state of the network corresponds
to something like an energy landscape.
Hopfield network:
• The Hopfield model is a fully connected feedback network with
symmetric weights.
w_ij = w_ji,  w_ii = 0
• In the discrete Hopfield network the state update is
asynchronous and the units have binary/bipolar output functions.
Binary (0/1) or bipolar (-1/+1)
Architecture of Discrete Hopfield Network
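A minimal sketch of a discrete Hopfield network in Python. The slides state only the symmetry constraints; the Hebbian outer-product storage rule used here is the standard prescription and is an assumption, as are the example patterns. Recall is asynchronous: one randomly chosen unit is updated at a time, and the energy of the state never increases.

```python
import numpy as np

def store(patterns):
    """Hebbian outer-product storage for bipolar patterns (one pattern per row)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)          # w_ij accumulates p_i * p_j
    np.fill_diagonal(W, 0.0)         # w_ii = 0
    return W                         # symmetric by construction: w_ij = w_ji

def recall(W, state, steps=200, seed=0):
    """Asynchronous recall: update one randomly chosen unit per step."""
    rng = np.random.default_rng(seed)
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1   # bipolar threshold unit
    return s

def energy(W, s):
    """Energy of a state; it does not increase under asynchronous updates."""
    return -0.5 * s @ W @ s

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1, -1, -1,  1,  1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])     # corrupted copy of the first pattern
out = recall(W, noisy)
print(out, energy(W, out))
```

Each stored pattern sits at a minimum of the energy landscape, so starting the recall from a noisy state tends to settle into the nearest stored pattern.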
Competitive learning neural networks
• The output of each unit in the first (input) layer is given to all the units in the second layer (output layer) with adaptive (adjustable) feedforward weights.
• The output functions of the units in the second layer are either
linear or nonlinear depending on the task for which the network is
to be designed.
• The output of each unit in the second layer is fed back to itself
in a self-excitatory manner and to the other units in the layer in an
excitatory or inhibitory manner depending on the task.
• Generally the weights on the connections in the feedback layer
are nonadaptive or fixed. Such a combination of both feedforward
and feedback connection layers results in some kind of
competition among the activations of the units in the output layer,
and hence such networks are called competitive learning neural
networks.
Basic concepts of competitive learning:
• There will be competition among the output nodes.
• The output unit that has the highest activation during training is declared the winner.
• This rule is called winner-takes-all, because only the winning neuron is updated and the remaining neurons are left unchanged.
• For a neuron y_k to be the winner, its activation must be greater than that of all the other output neurons.
• The binary output function gives f(y) = 1 for the winner and 0 otherwise.
• The weight vector is updated by the formula below:
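(The slide's formula image is not reproduced here. A standard winner-takes-all update, consistent with the description later in this unit of w_k moving towards x, adjusts only the winning unit k:
Δw_kj = η (x_j − w_kj), i.e., w_k(new) = w_k(old) + η (x − w_k(old)),
where η is the learning rate; the weights of all the other units are left unchanged.)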
Analysis of pattern clustering network

• In competitive learning, when the input pattern is removed, only one unit in the feedback layer will have nonzero activation.
• That unit may be designated as the winner for the input pattern.
• If the feed forward weights are suitably adjusted, each of the
units in the feedback layer can be made to win for a group of
similar input patterns. The corresponding learning is called
competitive learning.
• These units are connected among themselves with fixed weights
in an on-centre off-surround manner.
• In the pattern clustering task, the pattern classes are formed on
unlabelled input data, and hence the corresponding learning is
unsupervised.
• In the competitive learning the weights in the feedforward path
are adjusted only after the winner unit in the feedback layer is
identified for a given input pattern.
• The activation y_i of the ith unit in the feedback layer for an input vector x = (x1, ..., xi, ..., xM)^T is given by the expression below, where w_ij is the (i, j)th element of the weight matrix W.
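(A plausible reconstruction of the slide's equation, assuming the usual weighted sum: y_i = Σ_{j=1}^{M} w_ij x_j, i.e., y = W x.)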
• Start with some initial random values for the weights. The given set of input vectors is applied one by one in a random order.
• For each input, the winner unit in the feedback layer is identified.
• The weights leading to that unit are adjusted in such a way that the weight vector w_k moves towards the input vector x, as in the sketch below.
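A minimal sketch of this clustering procedure in Python, assuming Euclidean winner selection (the slides select the unit with the highest activation w_i^T x, which is equivalent for normalized weight vectors) and the update Δw_k = η (x − w_k) for the winner only; the data and parameters are illustrative.

```python
import numpy as np

def competitive_clustering(X, n_clusters, eta=0.1, epochs=50, seed=0):
    """Winner-takes-all clustering: only the winning unit's weights move towards x."""
    rng = np.random.default_rng(seed)
    # start with small random weight vectors, one per output unit
    W = rng.normal(scale=0.1, size=(n_clusters, X.shape[1]))
    for _ in range(epochs):
        for x in rng.permutation(X):                         # inputs in random order
            k = np.argmin(np.linalg.norm(W - x, axis=1))     # winner = closest weight vector
            W[k] += eta * (x - W[k])                         # move the winner towards the input
    return W

# illustrative data: two loose groups of 2-D points
X = np.vstack([np.random.default_rng(1).normal(loc=[0, 0], size=(20, 2)),
               np.random.default_rng(2).normal(loc=[5, 5], size=(20, 2))])
centres = competitive_clustering(X, n_clusters=2)
print(centres)   # each row approximates the centre of one group of inputs
```

After training, each unit in the feedback layer wins for one group of similar input patterns, which is exactly the clustering behaviour described above.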
Analysis of Feature Mapping Network

• In the pattern clustering network using competitive learning,


only one unit in the feedback layer is made to win by appropriate
choice of the connections of the units in the feedback layer.
• The number of units corresponds to the number of possible clusters that the set of input pattern vectors is likely to form.
• On the other hand, there are many situations where it is difficult
to group the input patterns into distinct groups.
• The patterns may form a continuum in some feature space, and
it is this kind of information that may be needed in some
applications.
• For example, it may be of interest to know how close a given input is
to some of the other patterns for which the feed forward path has
already been trained.
• In other words, it is of interest to have some order in the activation
of a unit in the feedback layer in relation to the activations of its
neighboring units.
• A feature mapping network is also a competitive learning network
with nonlinear output functions for units in the feedback layer, as in
the networks used for pattern clustering.
• But the main distinction is in the geometrical arrangement of the
output units, and the significance attached to the neighboring units
during training.
• During recall of information the activations of the neighboring units
in the output feedback layer suggest that the input patterns
corresponding to these units are similar.
• There are three methods to implement feature mapping process.
• In one method the output layer is organized into predefined
receptive fields, and the unsupervised learning should perform the
feature mapping by activating appropriate connections. This can
also be viewed as orientation selectivity.
• Another method is to modify the feedback connections in the output layer: instead of connecting them in an on-centre off-surround manner, the connections can be made as indicated by a Mexican-hat type function.
• The immediate neighbors of unit i are connected in an excitatory manner (+ve weights), while far-off units are connected in an inhibitory manner (-ve weights).
• A third method of implementing the feature mapping process is to use an architecture of a competitive learning network with on-centre off-surround type of connections among units, but at each stage the weights are updated not only for the winning unit, but also for the units in its neighborhood.
• The neighbourhood region may be progressively reduced during learning. This is called a self-organizing network with Kohonen's learning.
• The SOM can be thought of as the simple competitive learning
model with a neighborhood constraint on the output units.
• The output units are arranged in a spatial grid; for instance, 100
output units might form a 10x10 square grid.
• The amount of adjustment is determined by the distance in the
grid of a given output unit from the winning unit.
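A minimal sketch of Kohonen's self-organizing map in Python along the lines described above: the output units lie on a square grid, the winner and its grid neighbours are updated, and the neighbourhood shrinks as learning proceeds. The grid size, decay schedules, and Gaussian neighbourhood function are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def train_som(X, grid=10, epochs=30, eta0=0.5, sigma0=3.0, seed=0):
    """Kohonen SOM: grid x grid output units, each with a weight vector in input space."""
    rng = np.random.default_rng(seed)
    W = rng.random((grid, grid, X.shape[1]))                 # weight vectors on the grid
    rows, cols = np.indices((grid, grid))                    # grid coordinates of each unit
    for t in range(epochs):
        eta = eta0 * (1 - t / epochs)                        # learning rate decays over time
        sigma = sigma0 * (1 - t / epochs) + 0.5              # neighbourhood width shrinks
        for x in rng.permutation(X):
            d = np.linalg.norm(W - x, axis=2)                # distance of x to every unit
            wi, wj = np.unravel_index(np.argmin(d), d.shape) # winning unit on the grid
            grid_dist2 = (rows - wi) ** 2 + (cols - wj) ** 2 # squared grid distance to winner
            h = np.exp(-grid_dist2 / (2 * sigma ** 2))       # Gaussian neighbourhood factor
            W += eta * h[..., None] * (x - W)                # larger moves near the winner
    return W

X = np.random.default_rng(1).random((200, 3))                # illustrative 3-D inputs
som = train_som(X)
print(som.shape)                                             # (10, 10, 3)
```

Because neighbouring grid units are pulled together during training, nearby units end up responding to similar inputs, which is the ordering property the feature map is meant to provide.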
Associative memory

• Pattern storage is an obvious pattern recognition task that one would like to realize using an artificial neural network.
• This is a memory function, where the network is expected to
store the pattern information (not data) for later recall.
• An artificial neural network behaves like an associative memory, in which a pattern is associated with another pattern, or with itself.
Characteristics of associative memory:
• The network should have a large capacity, i.e., ability to store a
large number of patterns or pattern associations.
• The network should be fault tolerant in the sense that damage to
a few units or connections should not affect the performance in
recall significantly.
• The network should be able to recall the stored pattern or the
desired associated pattern even if the input pattern is distorted or
noisy.
• The network performance as an associative memory should
degrade only gracefully due to damage to some units or
connections, or due to noise or distortion in the input.
• The network should be flexible enough to accommodate new patterns and to eliminate unnecessary patterns.
Types of associative memories:
•Auto associative memories
•Hetero associative memories
•Bidirectional associative memories
•Multidirectional associative memories
•Temporal associative memories

Auto associative memories:


• This is a single-layer neural network in which the input training vectors and the output target vectors are the same.
Auto associative memory architecture
Hetero associative memory:
• Similar to the autoassociative memory network, this is also a single-layer neural network.
• In this network the input training vectors and the output target vectors are not the same.
• The hetero associative network is static in nature; hence, there are no non-linear or delay operations.
Hetero associative memory architecture.
Bidirectional associative memory:
• The objective is to store a set of pattern pairs in such a way that
any stored pattern pair can be recalled by giving either of the
patterns as input.
• The network is a two-layer hetero associative neural network that encodes binary or bipolar pattern pairs (a_l, b_l) using Hebbian learning.

• The BAM weight matrix from the first layer to the second layer is
given by
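(The slide's equation image is not reproduced; a standard formulation, assumed here, is W = Σ_{l=1}^{L} b_l a_l^T, so that the second layer computes f(W a).)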

• The weight matrix from the second layer to the first layer is given
by
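(Again an assumed standard formulation: the reverse direction uses the transpose, W^T = Σ_{l=1}^{L} a_l b_l^T, so that the first layer computes f(W^T b).)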
The energy function is given by
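(The equation image is not reproduced. Under the weight matrix assumed above, a standard BAM energy for a bipolar state pair (a, b) is V(a, b) = −b^T W a.)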

The change in energy due to change in ai is given by
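(Also not reproduced on the slide; with the energy assumed above, a change Δa_i in the ith unit of the first layer gives ΔV = −(Σ_j w_ji b_j) Δa_i, which is non-positive whenever a_i changes in the direction of its net input.)

A minimal BAM sketch in Python under the formulation assumed above, with bipolar patterns, sign-threshold units, and illustrative pattern pairs:

```python
import numpy as np

def bam_weights(A, B):
    """Weight matrix from the first layer to the second: W = sum_l b_l a_l^T."""
    return sum(np.outer(b, a) for a, b in zip(A, B))

def bam_recall(W, a, iters=10):
    """Recall a stored pair from a (possibly noisy) first-layer pattern a."""
    sign = lambda v: np.where(v >= 0, 1, -1)
    b = sign(W @ a)                     # forward pass to the second layer
    for _ in range(iters):              # bounce between the two layers until stable
        a = sign(W.T @ b)
        b = sign(W @ a)
    return a, b

A = [np.array([1, -1, 1, -1]), np.array([-1, -1, 1, 1])]
B = [np.array([1, 1, -1]),     np.array([-1, 1, 1])]
W = bam_weights(A, B)
print(bam_recall(W, np.array([1, -1, 1, 1])))   # input: noisy version of A[0]
```

Giving either member of a stored pair (or a noisy version of it) as input lets the back-and-forth updates settle towards the associated pair, which is the bidirectional recall described above.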


Multidirectional Associative Memory:
• The bidirectional associative memory concept can be generalized to
store associations among more than two patterns.
• The multiple association memory is also called multidirectional
associative memory.
The weight matrices for the pairs of layers are given by
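(The slide's equation image is not reproduced. A natural generalization of the BAM rule, stated here as an assumption, is that each pair of layers (p, q) is trained exactly as a BAM, with W_qp = Σ_{l=1}^{L} a_l^(q) (a_l^(p))^T mapping layer p to layer q, and its transpose mapping layer q back to layer p.)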
Temporal Associative Memory:
• The basic idea is that the adjacent overlapping pattern pairs are to
be stored in a BAM.
• The last pattern in the sequence is paired with the first pattern.
• Let a1, a2, ..., aL be a sequence of L patterns, each with a dimensionality of M.
• Then (a1, a2), (a2, a3), ..., (ai, ai+1), ..., (aL-1, aL), and (aL, a1) form the pattern pairs to be stored in the BAM.
• The weight matrix in the forward direction is given by
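(The equation image is not reproduced. Consistent with the pairs listed above, a standard forward weight matrix is W = Σ_{l=1}^{L} a_{l+1} a_l^T, with a_{L+1} taken to be a_1, so that presenting a_l tends to recall a_{l+1}; the reverse direction uses W^T.)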
