CCS355
NEURAL NETWORKS & DEEP LEARNING
UNIT – 2 NOTES
BE
III YEAR – VI SEM (R21)
(2023-2024)
Prepared by
Asst. Prof. M. Gokilavani
Department of Information Technology
UNIT II - ASSOCIATIVE MEMORY AND UNSUPERVISED LEARNING NETWORKS
Training Algorithms for Pattern Association-Auto associative Memory Network-Hetero associative
Memory Network-Bidirectional Associative Memory (BAM)-Hopfield Networks-Iterative Auto
associative Memory Networks-Temporal Associative Memory Network-Fixed Weight Competitive Nets-
Kohonen Self-Organizing Feature Maps-Learning Vector Quantization-Counter propagation Networks-
Adaptive Resonance Theory Network.
1. INTRODUCTION:
 An associative memory network can store a set of patterns as memories.
 When the associative memory is being presented with a key pattern, it responds by producing one of
the stored patterns, which closely resembles or relates to the key pattern.
 Thus, the recall is through association of the key pattern, with the help of information memorized.
 These types of memories are also called as content-addressable memories (CAM).
 The CAM can also be viewed as associating data to an address, i.e., for every data item in the memory
there is a corresponding unique address.
 Also, it can be viewed as a data correlator: the input data is correlated with the data stored in
the CAM.
 It should be noted that the stored patterns must be unique, i.e., different patterns in each location.
 If the same pattern exists in more than one location in the CAM, then, even though the correlation is
correct, the address is noted to be ambiguous. Associative memory makes a parallel search within a
stored data file.
 The concept behind this search is to output any one or all stored items which match the given search
argument.
TRAINING ALGORITHMS FOR PATTERN ASSOCIATION:
There are two algorithms developed for training of pattern association nets.
i. Hebb Rule
ii. Outer Products Rule
i. Hebb Rule: The Hebb rule is widely used for finding the weights of an associative memory
neural network. The training vector pairs here are denoted as s:t. The weights are updated until
there is no weight change.
ii. Outer Products Rule: The outer products rule is a method for finding the weights of an associative net; a sketch of both rules is given below.
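The formula figures are not reproduced in this copy; as a hedged illustration in the standard notation (training pairs s(p):t(p), p = 1, ..., P), the Hebb rule increments w_{ij}(new) = w_{ij}(old) + s_i t_j for each pair, which in batch form is exactly the outer products rule

W = \sum_{p=1}^{P} s(p)^{T}\, t(p), \qquad \text{i.e.} \qquad w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p).

A minimal NumPy sketch of this computation (illustrative only; the array names are assumptions):

import numpy as np

def outer_product_weights(S, T):
    # S: P x n matrix of training input vectors s(p); T: P x m matrix of targets t(p).
    # Returns the n x m weight matrix W = sum_p s(p)^T t(p) (Hebb / outer-products rule).
    W = np.zeros((S.shape[1], T.shape[1]))
    for s, t in zip(S, T):
        W += np.outer(s, t)   # accumulate one outer product per training pair
    return W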
ASSOCIATIVE MEMORY:
 Associative memory is also known as content addressable memory (CAM) or associative
storage or associative array.
 It is a special type of memory that is optimized for performing searches through data, as
opposed to providing a simple direct access to the data based on the address.
 It can store a set of patterns as memories. When the associative memory is presented with a key
pattern, it responds by producing one of the stored patterns which closely resembles or relates
to the key pattern.
 It can be viewed as data correlation here: the input data is correlated with the stored data in
the CAM.
There are two types of associative memories:
i. Auto Associative Memory Network
ii. Hetero Associative memory Network
2. AUTO ASSOCIATIVE MEMORY NETWORK:
 An auto-associative memory recovers a previously stored pattern that most closely relates to the
current pattern. It is also known as an auto-associative correlator.
 In the auto associative memory network, the training input vector and training output vector are
the same.
AUTO ASSOCIATIVE MEMORY ALGORITHM:
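The algorithm figure is not reproduced in this copy. The following is a minimal sketch of the usual textbook procedure (Hebb-rule training with identical input and target vectors, then recall through a bipolar step activation); the function names and the assumption of bipolar patterns are illustrative:

import numpy as np

def train_auto_associative(patterns):
    # patterns: P x n matrix of bipolar stored vectors; here s(p) = t(p).
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for s in patterns:
        W += np.outer(s, s)            # Hebb rule with input equal to target
    return W

def recall_auto_associative(W, x):
    # Net input x W passed through a bipolar step activation.
    return np.where(x @ W >= 0, 1, -1)

For example, after storing s = [1, 1, -1, -1], presenting a slightly corrupted copy of s should (within the network's capacity) recall the stored vector.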
3. HETERO ASSOCIATIVE MEMORY NETWORK:
 In a hetero-associative memory, the training input and the target output vectors are different.
 The weights are determined in such a way that the network can store a set of pattern associations. Each
association is a pair of training input and target output vectors (s(p), t(p)), with p = 1, 2, ..., P.
 Each vector s(p) has n components and each vector t(p) has m components. The determination
of weights is done either by using Hebb rule or delta rule.
 The net finds an appropriate output vector, which corresponds to an input vector x, that may be
either one of the stored patterns or a new pattern.
HETERO ASSOCIATIVE MEMORY ALGORITHM
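The algorithm figure is not reproduced here; in the usual textbook form (notation assumed), after the weights are found by the Hebb or outer products rule, testing proceeds by computing, for each output unit Y_j, the net input

y_{in_j} = \sum_{i=1}^{n} x_i\, w_{ij},

and applying the activation (for bipolar targets) y_j = 1 if y_{in_j} > 0, y_j = 0 if y_{in_j} = 0, and y_j = -1 if y_{in_j} < 0.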
4. BIDIRECTIONAL ASSOCIATIVE MEMORY (BAM):
 Bidirectional associative memory (BAM) was first proposed by Bart Kosko in the year 1988.
 The BAM network performs forward and backward associative searches for stored stimulus
responses.
 The BAM is a recurrent hetero-associative pattern-matching network that encodes binary or
bipolar patterns using the Hebbian learning rule.
 It associates patterns from set A with patterns from set B, and the reverse association is also performed.
 BAM neural nets can respond to input from either layer (input layer or output layer).
BAM ARCHITECTURE:
 The architecture of the BAM network consists of two layers of neurons which are connected by
directed weighted path interconnections.
 The network dynamics involve two layers of interaction. The BAM network iterates by sending
the signals back and forth between the two layers until all the neurons reach equilibrium.
 The weights associated with the network are bidirectional. Thus, BAM can respond to the inputs
in either layer.
 Figure shows a BAM network consisting of n units in the X layer and m units in the Y layer. The layers
can be connected in both directions (bidirectionally), with the result that the weight matrix for signals sent from
the X layer to the Y layer is W and the weight matrix for signals sent from the Y layer to the X
layer is W^T (the transpose of W). Thus, the weight matrix is calculated in both directions.
Determination of Weights:
Let the input vectors be denoted by s(p) and the target vectors by t(p), p = 1, ..., P. Then the weight
matrix W that stores this set of input and target vector pairs can be determined by the Hebb rule training
algorithm. In case of the input vectors being binary, the weight matrix W = {wij} is given by
When the input vectors are bipolar, the weight matrix W = {wij} can be defined as
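The two formulas are not reproduced in this copy; in the standard form (assumed here),

binary input vectors:  w_{ij} = \sum_{p=1}^{P} \big(2 s_i(p) - 1\big)\big(2 t_j(p) - 1\big),
bipolar input vectors: w_{ij} = \sum_{p=1}^{P} s_i(p)\, t_j(p).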
TESTING ALGORITHM FOR DISCRETE BIDIRECTIONAL ASSOCIATIVE MEMORY:
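The testing-algorithm figure is not reproduced in this copy. A minimal sketch of the usual bidirectional recall loop (bipolar signals assumed; the function name and the tie-breaking choice are illustrative):

import numpy as np

def bam_recall(W, x, max_iters=100):
    # W: n x m weight matrix; x: bipolar key pattern presented to the X layer.
    # Signals pass X -> Y through W and Y -> X through W^T until both layers stop changing.
    y = np.where(x @ W >= 0, 1, -1)           # ties (net input 0) resolved to +1 here for simplicity
    for _ in range(max_iters):
        x_new = np.where(y @ W.T >= 0, 1, -1)
        y_new = np.where(x_new @ W >= 0, 1, -1)
        if np.array_equal(x_new, x) and np.array_equal(y_new, y):
            break                             # equilibrium reached
        x, y = x_new, y_new
    return x, y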
5. HOPFIELD NETWORKS:
 Hopfield neural network was proposed by John J. Hopfield in 1982. It is an auto-associative fully
interconnected single layer feedback network. It is a symmetrically weighted network (i.e., Wij =
Wji).
 The Hopfield network is commonly used for auto-association and optimization tasks.
The Hopfield network is of two types
 Discrete Hopfield Network
 Continuous Hopfield Network
i. DISCRETE HOPFIELD NETWORK: When the network is operated in a discrete-time fashion it is called a
discrete Hopfield network. The network takes two-valued inputs: binary (0, 1) or bipolar (+1, -1); the use
of bipolar inputs makes the analysis easier. The network has symmetrical weights with no self-
connections, i.e., wij = wji and wii = 0.
Architecture of Discrete Hopfield Network
 The Hopfield's model consists of processing elements with two outputs, one inverting and the other
non-inverting.
 The outputs from each processing element are fed back to the inputs of the other processing elements but
not to itself.
Training Algorithm of Discrete Hopfield Network
 During training of the discrete Hopfield network, the weights will be updated. As we know, we can have
binary input vectors as well as bipolar input vectors.
 Let the input vectors be denoted by s(p), p = 1, ..., P. Then the weight matrix W to store this set of input
vectors is determined as follows.
 In case of input vectors being binary, the weight matrix W = {wij} is given by
When the input vectors are bipolar, the weight matrix W = {wij} can be defined as
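The formulas are not reproduced in this copy; in the standard form (assumed here),

binary input vectors:  w_{ij} = \sum_{p=1}^{P} \big(2 s_i(p) - 1\big)\big(2 s_j(p) - 1\big)  for i \ne j,
bipolar input vectors: w_{ij} = \sum_{p=1}^{P} s_i(p)\, s_j(p)  for i \ne j,

with w_{ii} = 0 in both cases (no self-connections).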
II. CONTINUOUS HOPFIELD NETWORK
 A continuous Hopfield network has time as a continuous variable, and can be used for associative memory
problems or for optimization problems such as the traveling salesman problem.
 The nodes of this network have a continuous, graded output rather than a two state binary output.
Thus, the energy of the network decreases continuously with time.
The output is defined as:
Where,
 vi = output from the continuous Hopfield network
 ui = internal activity of a node in continuous Hopfield network.
Energy Function
 The Hopfield networks have an energy function associated with them. It either diminishes or
remains unchanged on update (feedback) after every iteration.
 The energy function for a continuous Hopfield network is defined as:
To determine whether the network will converge to a stable configuration, we check that the energy function
reaches its minimum (i.e., that it is non-increasing over time).
The network is bound to converge if the activity of each neuron with respect to time is given by the
following differential equation:
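The formulas referred to above are not reproduced in this copy; one commonly quoted set (notation assumed here, not taken from the original figures) is: the output is a sigmoid of the internal activity,

v_i = g(u_i) = \frac{1}{1 + e^{-\lambda u_i}},

the energy function is

E = -\frac{1}{2} \sum_{i} \sum_{j \ne i} w_{ij} v_i v_j - \sum_{i} x_i v_i + \frac{1}{\lambda} \sum_{i} \int_{0}^{v_i} g^{-1}(v)\, dv,

convergence to a stable configuration requires dE/dt \le 0, and this holds when each neuron's activity obeys

\frac{du_i}{dt} = -\frac{u_i}{\tau} + \sum_{j} w_{ij} v_j + x_i.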
DIFFERENCE BETWEEN AUTO ASSOCIATIVE MEMORY AND HETERO ASSOCIATIVE
MEMORY:
1. Auto associative memory: the input (S) and output (T) vectors are the same. Hetero associative memory: the input (S) and output (T) vectors are different.
2. Auto associative memory: recalls a memory of the same modality as the one that evoked it. Hetero associative memory: recalls a memory different in character from the input that evoked it.
3. Auto associative memory: retrieves the same pattern that was stored. Hetero associative memory: retrieves a stored pattern that is associated with, but different from, the input.
4. Auto associative memory examples: color correction, color constancy. Hetero associative memory examples: space transforms (Fourier), dimensionality reduction (PCA).
6. FIXED WEIGHT COMPETITIVE NETS: (Unsupervised Learning)
 In these competitive networks the weights remain fixed, even during the training process.
 The idea of competition among neurons is used for enhancement of contrast in their activations.
There are two such networks:
 Maxnet
 Hamming networks
i. MAXNET
 Maxnet network was developed by Lippmann in 1987.
 The Maxnet serves as a subnet for picking the node whose input is the largest.
 All the nodes present in this subnet are fully interconnected and there exist symmetrical weights
in all these weighted interconnections.
Architecture of Maxnet
 In the architecture of Maxnet, fixed symmetrical weights are present over the weighted
interconnections.
 The weights between the neurons are inhibitory and fixed. The Maxnet with this structure can
be used as a subnet to select a particular node whose net input is the largest.
Testing Algorithm of Maxnet
The Maxnet uses the following activation function:
Testing algorithm
Step 0: Initial weights and initial activations are set. The weight is set as [0 < ε < 1/m], where "m" is the
total number of nodes. Let
Step 1: Perform Steps 2-4, when stopping condition is false.
Step 2: Update the activations of each node. For j = 1 to m,
Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,
Step 4: Finally, test the stopping condition for convergence of the network. The following is the stopping
condition: If more than one node has a nonzero activation, continue; else stop.
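A minimal sketch of this competition (the value of ε and the sample activations are illustrative; the update and the stopping rule follow Steps 2-4 above):

import numpy as np

def maxnet(activations, eps=0.15):
    # eps must satisfy 0 < eps < 1/m, where m is the number of nodes (Step 0).
    a = np.asarray(activations, dtype=float)
    while np.count_nonzero(a > 0) > 1:                 # Step 4: stop when at most one node stays active
        a = np.maximum(0.0, a - eps * (a.sum() - a))   # Step 2: x_j(new) = f[x_j(old) - eps * sum_{k != j} x_k(old)]
    return a

For example, maxnet([0.2, 0.4, 0.6, 0.8]) drives every activation except the largest one to zero.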
II. HAMMING NETWORK
 The Hamming network is a two-layer feed-forward neural network for classification of binary or
bipolar n-tuple input vectors using the minimum Hamming distance, denoted as DH (Lippmann,
1987).
 The first layer is the input layer for the n-tuple input vectors. The second layer (also called the
memory layer) stores p memory patterns.
 A p-class Hamming network has p output neurons in this layer.
 The strongest response of a neuron is indicative of the minimum Hamming distance between the
stored pattern and the input vector.
Hamming Distance
Consider two bipolar vectors x and y of dimension n. Their dot product can be written as
x . y = a - d
Where:
a is the number of bits in agreement in x and y (number of similar bits in x and y),
d is the number of bits different in x and y (number of dissimilar bits in x and y).
The value d is the Hamming distance between the two vectors, while a - d is their dot product. Since the total
number of components is n, we have a + d = n, and therefore a = (x . y + n)/2.
From this relation it is clearly understood that the weights can be set to one-half the exemplar
vector and the bias can be set initially to n/2.
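In symbols (standard form assumed, since the equations are not reproduced here): setting w_{ij} = e_i(j)/2 (one-half of the j-th exemplar) and b_j = n/2 makes the net input of output unit Y_j

y_{in_j} = b_j + \sum_{i=1}^{n} x_i w_{ij} = \frac{n}{2} + \frac{1}{2}\, x \cdot e(j) = a,

i.e. the number of components in which the input agrees with exemplar e(j); the unit with the largest net input therefore corresponds to the minimum Hamming distance d = n - a.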
Testing Algorithm of Hamming Network:
Step 0: Initialize the weights. For i = 1 to n and j = 1 to m,
Initialize the bias for storing the "m" exemplar vectors. For j = 1 to m,
Step 1: Perform Steps 2-4 for each input vector x.
Step 2: Calculate the net input to each unit Yj, i.e.,
Step 3: Initialize the activations for Maxnet, i.e.,
Step 4: Maxnet iterates to find the exemplar that best matches the input pattern.
7. KOHONEN SELF-ORGANIZING FEATURE MAPS:
 Self-Organizing Feature Maps (SOM) was developed by Dr. Teuvo Kohonen in 1982. Kohonen
Self-Organizing feature map (KSOM) refers to a neural network, which is trained using
competitive learning.
 Basic competitive learning implies that the competition process takes place before the cycle of
learning.
 The competition process suggests that some criteria select a winning processing element.
 After the winning processing element is selected, its weight vector is adjusted according to the
used learning law.
 Feature mapping is a process which converts patterns of arbitrary dimensionality into a
response over a one- or two-dimensional array of neurons.
 The network performing such a mapping is called a feature map.
 The reason for reducing the higher dimensionality is the ability to preserve the neighborhood topology.
Training Algorithm
Step 0: Initialize the weights with random values and set the learning rate α.
Step 1: Perform Steps 2-8 when stopping condition is false.
Step 2: Perform Steps 3-5 for each input vector x.
Step 3: Compute the square of the Euclidean distance, i.e., for each j = 1 to m,
Step 4: Find the winning unit index J, so that D (J) is minimum.
Step 5: For all units j within a specific neighborhood of J and for all i, calculate the new weights:
Step 6: Update the learning rate α using the formula (t is the time step):
Step 7: Reduce radius of topological neighborhood at specified time intervals.
Step 8: Test for stopping condition of the network.
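A compact sketch of Steps 0 and 2-6 (the neighborhood is simplified to the winning unit alone, i.e. radius 0, and the learning-rate schedule is an illustrative choice, not taken from the original formulas):

import numpy as np

def train_som(X, m, alpha=0.5, epochs=10, seed=0):
    # X: N x n data matrix; m: number of map units (a 1-D map, for simplicity).
    rng = np.random.default_rng(seed)
    W = rng.random((m, X.shape[1]))              # Step 0: random initial weights
    for t in range(epochs):
        for x in X:
            D = ((W - x) ** 2).sum(axis=1)       # Step 3: squared Euclidean distance to every unit
            J = int(np.argmin(D))                # Step 4: winning unit index
            W[J] += alpha * (x - W[J])           # Step 5: move the winner toward the input
        alpha *= 0.5                             # Step 6: reduce the learning rate
    return W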
8. LEARNING VECTOR QUANTIZATION:
 In 1980, the Finnish professor Teuvo Kohonen discovered that some areas of the brain develop structures
with different areas, each of them highly sensitive to a specific input pattern.
 It is based on competition among neural units based on a principle called winner-takes-all.
 Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm.
 A prototype is an early sample, model, or release of a product built to test a concept or process.
 One or more prototypes are used to represent each class in the dataset. New (unknown) data
points are then assigned the class of the prototype that is nearest to them. In order for "nearest"
to make sense, a distance measure has to be defined.
 There is no limitation on how many prototypes can be used per class, the only requirement being
that there is at least one prototype for each class.
 LVQ is a special case of an artificial neural network and it applies a winner-take-all Hebbian
learning-based approach.
 With a small difference, it is similar to Self-Organizing Maps (SOM) algorithm. SOM and LVQ
were invented by Teuvo Kohonen.
 LVQ system is represented by prototypes W=(W1....,Wn). In winner-take-all training algorithms,
the winner is moved closer if it correctly classifies the data point or moved away if it classifies
the data point incorrectly.
 An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the
respective application domain.
Training Algorithm
Step 0: Initialize the reference vectors.
This can be done using the following steps.
From the given set of training vectors, take the first "m" (number of clusters) training vectors
and use them as weight vectors, the remaining vectors can be used for training.
Assign the initial weights and classifications randomly.
K-means clustering method.
Set initial learning rate α
Step 1: Perform Steps 2-6 if the stopping condition is false.
Step 2: Perform Steps 3-4 for each training input vector x
Step 3: Calculate the Euclidean distance; for i = 1 to n, j = 1 to m,
Find the winning unit index J, when D (J) is minimum
Step 4: Update the weights on the winning unit, Wj using the following conditions.
Step 5: Reduce the learning rate α
Step 6: Test for the stopping condition of the training process. (The stopping condition may be a fixed
number of epochs, or the learning rate reducing to a negligible value.)
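The update conditions referred to in Step 4 are, in the usual form (notation assumed here): if the class label carried by the winning unit W_J matches the class of the input x, move the winner toward the input,

w_J(new) = w_J(old) + \alpha\, [x - w_J(old)];

otherwise move it away,

w_J(new) = w_J(old) - \alpha\, [x - w_J(old)].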
9. COUNTER PROPAGATION NETWORKS:
 The counter propagation network (CPN) was proposed by Hecht-Nielsen in 1987. It is a
multilayer network based on a combination of the input, output, and clustering layers.
 The applications of the counter propagation net are data compression, function approximation and
pattern association.
 The counter propagation network is basically constructed from an instar-outstar model.
 This model is a three-layer neural network that performs input-output data mapping, producing an
output vector y in response to an input vector x, on the basis of competitive learning.
 The three layers in an instar-outstar model are the input layer, the hidden (competitive) layer and
the output layer.
 There are two stages involved in the training process of a counter propagation net.
 The input vectors are clustered in the first stage.
 In the second stage of training, the weights from the cluster layer units to the output units are
tuned to obtain the desired response.
There are two types of counter propagation network:
i. Full counter propagation network
ii. Forward-only counter propagation network
I. FULL COUNTER PROPAGATION NETWORK:
 Full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a
look-up table.
 The full CPN works best if the inverse function exists and is continuous.
 The vectors x and y propagate through the network in a counterflow manner to yield output vectors
x* and y*.
Architecture of Full Counter propagation Network
 The four major components of the instar-outstar model are the input layer, the instar, the
competitive layer and the outstar.
 For each node in the input layer there is an input value xi. All the instars are grouped into a layer
called the competitive layer.
 Each of the instars responds maximally to a group of input vectors in a different region of space.
 An outstar model has all the nodes in the output layer and a single node in the
competitive layer. The outstar looks like the fan-out of a node.
Training Algorithm for Full Counter propagation Network:
Step 0: Set the initial weights and the initial learning rates.
Step 1: Perform Steps 2-7 if stopping condition is false for phase-I training.
Step 2: For each of the training input vector pair x: y presented, perform Steps 3-5.
Step 3: Make the X-input layer activations to vector X. Make the Y-input layer activations to vector Y.
Step 4: Find the winning cluster unit. If the dot product method is used, find the cluster unit Zj with the
largest net input; for j = 1 to p,
If Euclidean distance method is used, find the cluster unit Zj whose squared distance from input vectors
is the smallest
If there occurs a tie in case of selection of winner unit, the unit with the smallest index is the winner.
Take the winner unit index as J.
Step 5: Update the weights over the calculated winner unit Zj
Step 6: Reduce the learning rates α and β
Step 7: Test stopping condition for phase-I training.
Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input pair x: y. Here a and b are small constant learning rates used in phase II.
Step 10: Make the X-input layer activations to vector x. Make the Y-input layer activations to vector y.
Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering into unit ZJ
Step 13: Update the weights from unit Zj to the output layers.
Step 14: Reduce the learning rates a and b.
Step 15: Test stopping condition for phase-II training.
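The weight updates referred to in Steps 5, 12 and 13 are, in the usual textbook form (notation assumed here, since the formulas are not reproduced): in phase I the winner's incoming weights are moved toward the presented vectors,

v_{iJ}(new) = v_{iJ}(old) + \alpha\, [x_i - v_{iJ}(old)],    w_{kJ}(new) = w_{kJ}(old) + \beta\, [y_k - w_{kJ}(old)];

in phase II the outstar weights from the winning unit Z_J to the two output layers are moved toward the targets,

u_{Jk}(new) = u_{Jk}(old) + a\, [y_k - u_{Jk}(old)],    t_{Ji}(new) = t_{Ji}(old) + b\, [x_i - t_{Ji}(old)].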
II. FORWARD-ONLY COUNTER PROPAGATION NETWORK:
 A simplified version of full CPN is the forward-only CPN.
 Forward-only CPN uses only the x vector to form the cluster on the Kohonen units during phase
I training.
 In case of forward-only CPN, first input vectors are presented to the input units.
 First, the weights between the input layer and cluster layer are trained.
 Then the weights between the cluster layer and output layer are trained.
 This is a specific competitive network, with target known.
Architecture of forward-only CPN
 It consists of three layers: input layer, cluster layer and output layer.
 Its architecture resembles the back-propagation network, but in CPN there exist
interconnections between the units in the cluster layer.
Training Algorithm for Forward-only Counter propagation network:
Step 0: Initialize the weights and the learning rate.
Step 1: Perform Steps 2-7 if stopping condition is false for phase-I training.
Step 2: Perform Steps 3-5 for each of training input X
Step 3: Set the X-input layer activations to vector X.
Step 4: Compute the winning cluster unit (J). If dot product method is used, find the cluster unit zj with
the largest net input.
If Euclidean distance method is used, find the cluster unit Zj whose squared distance from input patterns
is the smallest
If there exists a tie in the selection of winner unit, the unit with the smallest index is chosen as the
winner.
Step 5: Perform weight updating for unit Zj. For i = 1 to n,
Step 6: Reduce the learning rate α.
Step 7: Test stopping condition for phase-I training.
Step 8: Perform Steps 9-15 when stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input Pair x: y.
Step 10: Set the X-input layer activations to vector x. Set the Y-output layer activations to vector y.
Step 11: Find the winning cluster unit (use formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering into unit ZJ,
Step 13: Update the weights from unit Zj to the output layers.
Step 14: Reduce the learning rate β.
Step 15: Test stopping condition for phase-II training.
10. ADAPTIVE RESONANCE THEORY NETWORK.
 The Adaptive Resonance Theory (ART) was introduced as a hypothesis for human cognitive
information processing.
 The hypothesis has prompted neural models for pattern recognition and unsupervised learning.
ART systems have been used to explain various types of cognitive and brain data.
 The Adaptive Resonance Theory addresses the stability-plasticity dilemma (stability can be defined as
the ability to memorize what has been learned, and plasticity refers to the fact that the system is flexible enough to
gain new information): how can learning proceed in response to a huge number of input patterns
while the system simultaneously does not lose stability on what it has already learned?
 In other words, the stability-plasticity dilemma is concerned with how a system can adapt to new
data while keeping what was learned before.
 For such a task, a feedback mechanism is included among the ART neural network layers.
 In this neural network, the data, in the form of processing-element outputs, is reflected back and forth
between the layers. If an appropriate pattern is built up, resonance is reached, and adaptation can
occur during this period.
 The formal analysis of how to overcome the learning instability exhibited
by a competitive learning model led to the presentation of an extended hypothesis,
called adaptive resonance theory (ART).
 This formal investigation indicated that a specific type of top-down learned feedback and
matching mechanism could significantly overcome the instability issue.
 It was understood that top-down attentional mechanisms, which had earlier been found through
an investigation of connections between cognitive and reinforcement mechanisms, had
characteristics similar to these code-stabilizing mechanisms.
 In other words, once it was understood how to solve the instability issue formally, it also became
clear that one did not need to develop any qualitatively new mechanism to do so.
 One only needed to make sure to incorporate previously discovered attentional mechanisms.
These additional mechanisms enable code learning to self-stabilize in response to an
essentially arbitrary input environment.
 Grossberg presented the basic principles of the adaptive resonance theory.
 A category of ART called ART1 has been described as an arrangement of ordinary differential
equations by Carpenter and Grossberg. Theorems have been proved that predict both the order of search, as
a function of the learning history of the system, and the input patterns.
 ART1 is an unsupervised learning model primarily designed for recognizing binary patterns.
 It comprises an attentional subsystem, an orienting subsystem, a vigilance parameter, and a reset
module, as shown in the figure below.
 The vigilance parameter has a huge effect on the system: high vigilance produces more detailed
memories, and low vigilance produces more general ones.
 The ART1 attentional subsystem comprises two competitive networks, the comparison field layer L1 and
the recognition field layer L2, two control gains, Gain 1 and Gain 2, and two short-term memory
(STM) stages, S1 and S2.
 Long-term memory (LTM) traces between S1 and S2 multiply the signals in these pathways.
 Gain control enables L1 and L2 to distinguish the current stage of the running cycle. An STM
reset wave inhibits active L2 cells when mismatches between bottom-up and top-down signals
occur at L1.
 The comparison layer receives the binary external input and passes it to the recognition layer, which is
responsible for matching it to a classification category.
 This outcome is passed back to the comparison layer to determine whether the category matches
the input vector.
 If there is a match, then a new input vector is read and the cycle begins once again. If there is a
mismatch, then the orienting subsystem is in charge of preventing the previous category from obtaining
a new category match in the recognition layer.
 The two gains control the activity of the recognition and the comparison layer,
respectively.
 The reset wave specifically and enduringly inhibits the active L2 cell until the current input is removed.
The offset of the input pattern terminates its processing at L1 and triggers the offset of Gain 2. The Gain 2
offset causes a consistent decay of STM at L2 and thereby prepares L2 to encode the next input
pattern without bias.
ART1 Implementation process:
 ART1 is a self-organizing neural network having input and output neurons mutually coupled using
bottom-up and top-down adaptive weights that perform recognition.
 To start the methodology, the system is first trained as per the adaptive resonance theory by
inputting reference pattern data, in the form of a 5x5 matrix, into the neurons for clustering
within the output neurons.
 Next, the maximum number of nodes in L2 is defined, followed by the vigilance parameter. The
input pattern registers itself as short-term memory activity over the field of nodes L1.
 Converging and diverging pathways from L1 to the coding field L2, each weighted by an adaptive
long-term memory trace, transform the input into a net signal vector T.
 Internal competitive dynamics at L2 further transform T, creating a compressed code or content-
addressable memory.
 With strong competition, activation is concentrated at the L2 node that receives the maximal L1 →
L2 signal.
 The operation is divided into four phases: comparison, recognition, search, and learning; a sketch of the vigilance test used in the comparison phase is given below.
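As a hedged illustration of the comparison step only (binary patterns; the AND-based match and the vigilance test follow the usual ART1 description, but the function name and arguments are illustrative, not taken from these notes):

import numpy as np

def art1_resonates(x, top_down_template, rho):
    # x: binary input vector on L1; top_down_template: binary LTM template of the active L2 category.
    # Resonance occurs when the matched fraction |x AND w| / |x| reaches the vigilance rho;
    # otherwise the orienting subsystem resets the active category and the search continues.
    match = np.logical_and(x, top_down_template).sum()
    return x.sum() > 0 and (match / x.sum()) >= rho

A high vigilance (rho close to 1) forces finer, more detailed categories, while a low vigilance yields coarser, more general ones.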
Advantages of adaptive resonance theory (ART):
 It can be integrated and used with various other techniques to give more accurate results.
 It does not ensure stability in forming clusters.
 It can be used in different fields such as face recognition, embedded systems, robotics, target
recognition, medical diagnosis, signature verification, etc.
 It shows stability and is not disturbed by a wide range of inputs provided to the network.
 It has advantages over competitive learning, which cannot add new clusters
when considered necessary.
Application of ART:
 ART stands for Adaptive Resonance Theory. ART neural networks used for fast, stable learning
and prediction have been applied in different areas.
 The applications include target recognition, face recognition, medical diagnosis, signature
verification, and mobile robot control.
i. Target recognition:
 The Fuzzy ARTMAP neural network can be used for automatic classification of targets based on
their radar range profiles.
 Tests on synthetic data show that Fuzzy ARTMAP can give substantial savings in memory
requirements compared to k-nearest-neighbor (kNN) classifiers.
 The use of multiwavelength profiles markedly improves the performance of both kinds of
classifiers.
ii. Medical diagnosis:
 Medical databases present huge numbers of challenges found in general information
management settings where speed, use, efficiency, and accuracy are the prime concerns.
 A direct objective of improved computer-assisted medicine is to help to deliver intensive care in
situations that may be less than ideal.
 Working with these issues has stimulated several ART architecture developments, including
ARTMAP-IC.
iii. Signature verification:
 Automatic signature verification is a well-known and active area of research with various
applications such as bank check confirmation, ATM access, etc.
 The training of the network is performed using ART1, which uses global features as the input vector, and
the verification and recognition phase uses a two-step process.
 In the initial step, the input vector is coordinated with the stored reference vector, which was
used as a training set, and in the second step, cluster formation takes place.
iv. Mobile control robot:
 Nowadays, we see a wide range of robotic devices. Their programming, a field called artificial
intelligence, is still an area of active research.
 The human brain is an interesting subject as a model for such an intelligent system. Inspired by
the structure of the human brain, the artificial neural network emerged.
 Similar to the brain, the artificial neural network contains numerous simple computational units,
neurons that are interconnected mutually to allow the transfer of the signal from the neurons to
neurons.
 Artificial neural networks are used to solve different issues with good outcomes compared to
other decision algorithms.
Limitations of ART:
Some ART networks are inconsistent, as their results depend on the order of the training data or upon the learning
rate.
2 MARKS QUESTIONS AND ANSWERS
1. What is recall?
Ans: If the input vectors are uncorrelated (orthogonal), the Hebb rule will produce the correct weights, and the
response of the net, when tested with one of the training vectors, will be perfect recall.
2. Explain Learning vector Quantization.
Ans: LVQ is an adaptive data classification method. It is based on training data with desired class
information. LVQ uses unsupervised data clustering techniques to preprocess the data set
and obtain cluster centers.
3. What is meant by associative memory?
Ans: Associative memory is also known as content addressable memory (CAM) or associative
storage or associative array. It is a special type of memory that is optimized for performing
searches through data, as opposed to providing a simple direct access to the data based on the
address. It can store the set of patterns as memories. When the associative memory is
presented with a key pattern, it responds by producing one of the stored patterns which closely
resembles or relates to the key pattern. It can be viewed as data correlation here: input data
is correlated with the stored data in the CAM.
4. Define auto associative memory.
Ans: An auto-associative memory network, often implemented as a recurrent (feedback) network, is
a type of associative memory that is used to recall a pattern from partial or degraded inputs.
In an auto-associative network, the output of the network can be fed back into the input, allowing
the network to learn and remember the patterns it has been trained on. This type of memory
network is commonly used in applications such as speech and image recognition, where the
input data may be incomplete or noisy.
5. What is Hebbian Learning?
Ans: The Hebb rule is the simplest and most common method of determining weights for an associative
memory neural net. It can be used when patterns are represented as either binary or bipolar vectors.
6. What is Bidirectional Associative memory (BAM)?
Ans: Bidirectional Associative Memory (BAM) is a type of artificial neural network designed for
storing and retrieving heterogeneous pattern pairs. It plays a crucial role in various applications,
such as password authentication, neural network models, and cognitive management.
7. List the problems of BAM Network.
Ans:
 Storage capacity of the BAM: in the BAM, the number of stored associations should not
exceed the number of neurons in the smaller layer.
 Incorrect convergence: the BAM may not always converge to the closest association.
8. What is content addressable memory?
Ans: Content-addressable memory (CAM) is a special type of computer memory used in certain very-
high-speed searching applications.
 It is also known as associative memory or associative storage and compares input search data
against a table of stored data, and returns the address of matching data.
9. What are the delta rule for pattern association?
Ans: The delta rule is typically applied to the case in which pairs of patterns, consisting of an input
pattern and a target output pattern, are to be associated.
 When an input pattern is presented to an input layer of units, the appropriate output pattern will
appear on the output layer of units.
10. What is continuous BAM?
Ans: Continuous BAM transforms input smoothly and continuously into output in the range [0, 1]
using the logistic sigmoid function as the activation function for all units.
11. Which are the rules used in Hebb’s law?
Ans:
i. If two neurons on either side of a connection are activated synchronously, then the weight of
that connection is increased.
ii. If two neurons on either side of a connection are activated asynchronously, then the weight of
that connection is decreased.
12. What do you mean counter Propagation network?
Ans: The counter propagation network is a supervised neural network that can be used for multimodal
processing but is not trained using the back-propagation rule.
This network has been specifically developed to provide bidirectional mapping between input and
output training patterns.
13. What is Hopfield model?
Ans: The Hopfield model is a single layered recurrent network. Like the associative memory, it is
usually initialized with appropriate weights instead of being trained.
14. Define Self-Organizing Map.
Ans: Self Organizing Map (or Kohonen Map or SOM) is a type of Artificial Neural Network
which is also inspired by biological models of neural systems from the 1970s.
 It follows an unsupervised learning approach and trains its network through a competitive
learning algorithm.
 SOM is used for clustering and mapping (or dimensionality reduction) techniques to map
multidimensional data onto a lower-dimensional space, which allows people to reduce complex
problems for easy interpretation.
15. What is principle goal of the self-organizing map?
Ans: The principal goal of the Self Organizing Map (SOM) is to transform an incoming signal pattern
of arbitrary dimension into a one or two dimensional discrete map and to perform this
transformation adaptively in a topologically ordered fashion.
16. List the stags of the SOM algorithm.
Ans:
i. Initialization: Choose random values for the initial weight vectors wj.
ii. Sampling: Draw a sample training input vector x form the input space.
iii. Matching: Find the winner neuron I(x) with weight vector closest to input vector.
iv. Updating: Apply the weight update equation.
v. Continuation: Keep returning to step 2 until the feature map stops changing.
17. How does counter Propagation nets are trained?
Ans: Counter Propagation nets are trained in two stages:
i. First Stage: The input vectors are clustered. The Clusters that are formed may be based on
either the dot product metric or the Euclidean norm metric.
ii. Second stage: The weight from the cluster units to the output units are adapted to produce
the desired response.
18. List the possible drawback of counter propagation networks.
Ans:
 Training a counter propagation network has the same difficulty associated with training a
Kohonen network.
 Counter propagation networks tend to be larger than back propagation networks. If a certain
number of mappings are to be learned, the middle layer must have that many neurons.
19. How forward only differs from full counter propagation nets?
Ans:
 In forward-only counter propagation, only the x vectors are used to form the clusters on the Kohonen units during
the first stage of training, whereas full counter propagation uses both the x and y vectors.
 The original presentation of forward-only counter propagation used the Euclidean distance
between the input vector and the weight vector for the Kohonen unit.
20. What is forward only counter propagation?
Ans:
 It is a simplified version of the full counter propagation network.
 It is intended to approximate a function y = f(x) that is not necessarily invertible.
 It may be used if the mapping from x to y is well defined, but the mapping from y to x is not.
21. Define plasticity.
Ans: The ability of a net to learn a new pattern equally well at any stage of learning is called
plasticity.
22. List the components of ART1.
Ans: Components are as follows:
 The Short term memory layer (F1).
 The recognition layer (F2): It contains the long term memory of the system.
 Vigilance parameter (ρ): a parameter that controls the generality of the memory. A larger ρ
means more detailed memories; a smaller ρ produces more general memories.
Smart Storage Solutions.pptx for production engineering
rushikeshnavghare94
 
Reagent dosing (Bredel) presentation.pptx
Reagent dosing (Bredel) presentation.pptxReagent dosing (Bredel) presentation.pptx
Reagent dosing (Bredel) presentation.pptx
AlejandroOdio
 
Raish Khanji GTU 8th sem Internship Report.pdf
Raish Khanji GTU 8th sem Internship Report.pdfRaish Khanji GTU 8th sem Internship Report.pdf
Raish Khanji GTU 8th sem Internship Report.pdf
RaishKhanji
 
ELectronics Boards & Product Testing_Shiju.pdf
ELectronics Boards & Product Testing_Shiju.pdfELectronics Boards & Product Testing_Shiju.pdf
ELectronics Boards & Product Testing_Shiju.pdf
Shiju Jacob
 
Explainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptx
Explainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptxExplainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptx
Explainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptx
MahaveerVPandit
 
five-year-soluhhhhhhhhhhhhhhhhhtions.pdf
five-year-soluhhhhhhhhhhhhhhhhhtions.pdffive-year-soluhhhhhhhhhhhhhhhhhtions.pdf
five-year-soluhhhhhhhhhhhhhhhhhtions.pdf
AdityaSharma944496
 
Avnet Silica's PCIM 2025 Highlights Flyer
Avnet Silica's PCIM 2025 Highlights FlyerAvnet Silica's PCIM 2025 Highlights Flyer
Avnet Silica's PCIM 2025 Highlights Flyer
WillDavies22
 
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E..."Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...
Infopitaara
 
Process Parameter Optimization for Minimizing Springback in Cold Drawing Proc...
Process Parameter Optimization for Minimizing Springback in Cold Drawing Proc...Process Parameter Optimization for Minimizing Springback in Cold Drawing Proc...
Process Parameter Optimization for Minimizing Springback in Cold Drawing Proc...
Journal of Soft Computing in Civil Engineering
 
AI-assisted Software Testing (3-hours tutorial)
AI-assisted Software Testing (3-hours tutorial)AI-assisted Software Testing (3-hours tutorial)
AI-assisted Software Testing (3-hours tutorial)
Vəhid Gəruslu
 
Fort night presentation new0903 pdf.pdf.
Fort night presentation new0903 pdf.pdf.Fort night presentation new0903 pdf.pdf.
Fort night presentation new0903 pdf.pdf.
anuragmk56
 
Lidar for Autonomous Driving, LiDAR Mapping for Driverless Cars.pptx
Lidar for Autonomous Driving, LiDAR Mapping for Driverless Cars.pptxLidar for Autonomous Driving, LiDAR Mapping for Driverless Cars.pptx
Lidar for Autonomous Driving, LiDAR Mapping for Driverless Cars.pptx
RishavKumar530754
 
RICS Membership-(The Royal Institution of Chartered Surveyors).pdf
RICS Membership-(The Royal Institution of Chartered Surveyors).pdfRICS Membership-(The Royal Institution of Chartered Surveyors).pdf
RICS Membership-(The Royal Institution of Chartered Surveyors).pdf
MohamedAbdelkader115
 
Oil-gas_Unconventional oil and gass_reseviours.pdf
Oil-gas_Unconventional oil and gass_reseviours.pdfOil-gas_Unconventional oil and gass_reseviours.pdf
Oil-gas_Unconventional oil and gass_reseviours.pdf
M7md3li2
 
Data Structures_Introduction to algorithms.pptx
Data Structures_Introduction to algorithms.pptxData Structures_Introduction to algorithms.pptx
Data Structures_Introduction to algorithms.pptx
RushaliDeshmukh2
 
introduction to machine learining for beginers
introduction to machine learining for beginersintroduction to machine learining for beginers
introduction to machine learining for beginers
JoydebSheet
 
"Feed Water Heaters in Thermal Power Plants: Types, Working, and Efficiency G...
"Feed Water Heaters in Thermal Power Plants: Types, Working, and Efficiency G..."Feed Water Heaters in Thermal Power Plants: Types, Working, and Efficiency G...
"Feed Water Heaters in Thermal Power Plants: Types, Working, and Efficiency G...
Infopitaara
 
Mathematical foundation machine learning.pdf
Mathematical foundation machine learning.pdfMathematical foundation machine learning.pdf
Mathematical foundation machine learning.pdf
TalhaShahid49
 
fluke dealers in bangalore..............
fluke dealers in bangalore..............fluke dealers in bangalore..............
fluke dealers in bangalore..............
Haresh Vaswani
 
Smart_Storage_Systems_Production_Engineering.pptx
Smart_Storage_Systems_Production_Engineering.pptxSmart_Storage_Systems_Production_Engineering.pptx
Smart_Storage_Systems_Production_Engineering.pptx
rushikeshnavghare94
 
Smart Storage Solutions.pptx for production engineering
Smart Storage Solutions.pptx for production engineeringSmart Storage Solutions.pptx for production engineering
Smart Storage Solutions.pptx for production engineering
rushikeshnavghare94
 
Reagent dosing (Bredel) presentation.pptx
Reagent dosing (Bredel) presentation.pptxReagent dosing (Bredel) presentation.pptx
Reagent dosing (Bredel) presentation.pptx
AlejandroOdio
 
Raish Khanji GTU 8th sem Internship Report.pdf
Raish Khanji GTU 8th sem Internship Report.pdfRaish Khanji GTU 8th sem Internship Report.pdf
Raish Khanji GTU 8th sem Internship Report.pdf
RaishKhanji
 
ELectronics Boards & Product Testing_Shiju.pdf
ELectronics Boards & Product Testing_Shiju.pdfELectronics Boards & Product Testing_Shiju.pdf
ELectronics Boards & Product Testing_Shiju.pdf
Shiju Jacob
 
Explainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptx
Explainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptxExplainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptx
Explainable-Artificial-Intelligence-XAI-A-Deep-Dive (1).pptx
MahaveerVPandit
 
five-year-soluhhhhhhhhhhhhhhhhhtions.pdf
five-year-soluhhhhhhhhhhhhhhhhhtions.pdffive-year-soluhhhhhhhhhhhhhhhhhtions.pdf
five-year-soluhhhhhhhhhhhhhhhhhtions.pdf
AdityaSharma944496
 
Avnet Silica's PCIM 2025 Highlights Flyer
Avnet Silica's PCIM 2025 Highlights FlyerAvnet Silica's PCIM 2025 Highlights Flyer
Avnet Silica's PCIM 2025 Highlights Flyer
WillDavies22
 
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E..."Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...
"Boiler Feed Pump (BFP): Working, Applications, Advantages, and Limitations E...
Infopitaara
 
AI-assisted Software Testing (3-hours tutorial)
AI-assisted Software Testing (3-hours tutorial)AI-assisted Software Testing (3-hours tutorial)
AI-assisted Software Testing (3-hours tutorial)
Vəhid Gəruslu
 

CCS355 Neural Network & Deep Learning Unit II Notes with Question bank .pdf

AUTO ASSOCIATIVE MEMORY ALGORITHM:
 Training: the weights are found with the Hebb rule (or outer products rule), using the training input vector itself as the target, i.e., for bipolar patterns wij = Σp si(p) sj(p).
 Testing: an input vector x is presented, the net input yinj = Σi xi wij is computed, and the activation function is applied to recall the stored pattern.
3. HETERO ASSOCIATIVE MEMORY NETWORK:
 In a hetero-associative memory, the training input and the target output vectors are different.
 The weights are determined so that the network can store a set of pattern associations. Each association is a pair of training input and target output vectors (s(p), t(p)), with p = 1, 2, ..., P.
 Each vector s(p) has n components and each vector t(p) has m components. The weights are determined using either the Hebb rule or the delta rule.
 The net finds an appropriate output vector corresponding to an input vector x, which may be either one of the stored patterns or a new pattern.
HETERO ASSOCIATIVE MEMORY ALGORITHM:
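Since the algorithm figure is not reproduced here, the following minimal NumPy sketch illustrates the outer products (Hebb) rule for a hetero-associative net and a single recall step. The bipolar pattern pairs and variable names are invented for illustration only.

import numpy as np

# Hypothetical bipolar training pairs: s has n = 4 components, t has m = 2 components.
S = np.array([[ 1,  1, -1, -1],
              [-1, -1,  1,  1]])
T = np.array([[ 1, -1],
              [-1,  1]])

# Outer products (Hebb) rule: W = sum over p of outer(s(p), t(p)), shape (n, m).
W = sum(np.outer(s, t) for s, t in zip(S, T))

def recall(x, W):
    # Net input y_in = x W, followed by a bipolar step activation (ties broken towards +1).
    y_in = x @ W
    return np.where(y_in >= 0, 1, -1)

print(recall(np.array([1, 1, -1, -1]), W))   # recalls the first stored target, [ 1 -1]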
4. BIDIRECTIONAL ASSOCIATIVE MEMORY (BAM):
 Bidirectional associative memory (BAM) was first proposed by Bart Kosko in 1988.
 The BAM network performs forward and backward associative searches for stored stimulus responses.
 The BAM is a recurrent hetero-associative pattern-matching network that encodes binary or bipolar patterns using the Hebbian learning rule.
 It associates patterns from set A to patterns from set B, and vice versa.
 BAM neural nets can respond to input presented to either layer (input layer or output layer).
BAM ARCHITECTURE:
 The architecture of a BAM network consists of two layers of neurons connected by directed weighted interconnections.
 The network dynamics involve two layers of interaction. The BAM network iterates by sending signals back and forth between the two layers until all the neurons reach equilibrium.
 The weights associated with the network are bidirectional. Thus, BAM can respond to inputs presented in either layer.
 The figure shows a BAM network consisting of n units in the X layer and m units in the Y layer. The layers are connected in both directions (bidirectional), so that the weight matrix for signals sent from
the X layer to the Y layer is W, and the weight matrix for signals sent from the Y layer to the X layer is W^T (its transpose). Thus, the weight matrix is used in both directions.
Determination of Weights:
 Let the input vectors be denoted by s(p) and the target vectors by t(p), p = 1, ..., P. The weight matrix to store this set of input and target vectors can be determined by the Hebb rule training algorithm.
 In case the input vectors are binary, the weight matrix W = {wij} is given by
 wij = Σp [2si(p) − 1][2tj(p) − 1]
 When the input vectors are bipolar, the weight matrix W = {wij} is defined as
 wij = Σp si(p) tj(p)
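As a concrete illustration of the bipolar weight rule above, the short NumPy sketch below builds the BAM weight matrix from invented bipolar pairs and runs one forward and one backward recall step; the patterns and variable names are illustrative assumptions, not taken from the notes.

import numpy as np

# Hypothetical bipolar association pairs: X layer has n = 4 units, Y layer has m = 3 units.
S = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1]])
T = np.array([[ 1,  1, -1],
              [-1,  1,  1]])

# Bipolar Hebb rule: W[i, j] = sum over p of s_i(p) * t_j(p).
W = S.T @ T                        # shape (n, m)

bipolar = lambda v: np.where(v >= 0, 1, -1)

x = np.array([1, -1, 1, -1])       # present a pattern to the X layer
y = bipolar(x @ W)                 # forward pass  X -> Y uses W
x_back = bipolar(y @ W.T)          # backward pass Y -> X uses W transpose

print(y, x_back)                   # y matches T[0]; x_back returns to the stored X pattern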
TESTING ALGORITHM FOR DISCRETE BIDIRECTIONAL ASSOCIATIVE MEMORY:
5. HOPFIELD NETWORKS:
 The Hopfield neural network was proposed by John J. Hopfield in 1982. It is an auto-associative, fully interconnected, single-layer feedback network.
 It is a symmetrically weighted network (i.e., wij = wji).
 The Hopfield network is commonly used for auto-association and optimization tasks. The Hopfield network is of two types:
 Discrete Hopfield Network
 Continuous Hopfield Network
i. DISCRETE HOPFIELD NETWORK:
 When the network is operated in a discrete-time fashion, it is called a discrete Hopfield network. The network takes two-valued inputs, binary (0, 1) or bipolar (+1, -1); the use of bipolar inputs makes the analysis easier.
 The network has symmetrical weights with no self-connections, i.e., wij = wji and wii = 0.
Architecture of Discrete Hopfield Network
 The Hopfield model consists of processing elements with two outputs, one inverting and the other non-inverting.
 The output of each processing element is fed back to the inputs of the other processing elements, but not to itself.
Training Algorithm of Discrete Hopfield Network
 During training of a discrete Hopfield network, the weights are updated. The input vectors may be binary or bipolar.
 Let the input vectors be denoted by s(p), p = 1, ..., P. The weight matrix W to store this set of input vectors is obtained as follows.
 In case the input vectors are binary, the weight matrix W = {wij} is given by
 wij = Σp [2si(p) − 1][2sj(p) − 1] for i ≠ j, with wii = 0.
 When the input vectors are bipolar, the weight matrix W = {wij} is defined as
 wij = Σp si(p) sj(p) for i ≠ j, with wii = 0.
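A minimal NumPy sketch of the bipolar storage rule and asynchronous recall described above; the stored pattern and the noisy probe are invented for illustration.

import numpy as np

# Store one hypothetical bipolar pattern in a discrete Hopfield net (n = 6 units).
s = np.array([1, -1, 1, -1, 1, -1])

# Bipolar Hebb storage: W = s s^T with a zero diagonal (no self-connections).
W = np.outer(s, s).astype(float)
np.fill_diagonal(W, 0.0)

def recall(x, W, steps=10):
    # Asynchronous update: each unit in turn takes the sign of its net input.
    y = x.copy()
    for _ in range(steps):
        for i in np.random.permutation(len(y)):
            net = W[i] @ y
            if net != 0:                 # keep the previous state when the net input is zero
                y[i] = 1 if net > 0 else -1
    return y

noisy = np.array([1, -1, -1, -1, 1, -1])  # probe with one flipped bit
print(recall(noisy, W))                    # converges back to the stored pattern s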
II. CONTINUOUS HOPFIELD NETWORK
 A continuous network has time as a continuous variable, and can be used for associative memory problems or for optimization problems such as the travelling salesman problem.
 The nodes of this network have a continuous, graded output rather than a two-state binary output. Thus, the energy of the network decreases continuously with time.
 The output is defined as a continuous, sigmoid-shaped function of the internal activity, where
 vi = output of node i of the continuous Hopfield network,
 ui = internal activity of node i of the continuous Hopfield network.
Energy Function
 The Hopfield networks have an energy function associated with them. It either decreases or remains unchanged after every update (feedback) iteration.
 The energy function for a continuous Hopfield network is defined so that it decreases monotonically as the network state evolves.
 To determine whether the network will converge to a stable configuration, we check that the energy function reaches its minimum, i.e., dE/dt ≤ 0.
 The network is bound to converge if the activity of each neuron over time is governed by the corresponding differential equation.
DIFFERENCE BETWEEN AUTO ASSOCIATIVE MEMORY AND HETERO ASSOCIATIVE MEMORY:
1. Auto associative memory: the input (s) and output (t) vectors are the same. Hetero associative memory: the input and output vectors are different.
2. Auto associative memory recalls a memory of the same modality as the one that evoked it. Hetero associative memory recalls a memory different in character from the input.
3. An auto associative memory retrieves the same stored pattern. A hetero associative memory retrieves a different, associated stored pattern.
4. Examples - auto associative: color correction, color constancy. Hetero associative: space transforms (Fourier), dimensionality reduction (PCA).
6. FIXED WEIGHT COMPETITIVE NETS (Unsupervised Learning):
 In these competitive networks the weights remain fixed, even during the training process.
 The idea of competition among neurons is used to enhance the contrast in their activations. Two such networks are:
 Maxnet
 Hamming network
i. MAXNET
 The Maxnet network was developed by Lippmann in 1987.
 The Maxnet serves as a subnet for picking the node whose input is the largest.
 All the nodes in this subnet are fully interconnected, and symmetrical weights exist on all these weighted interconnections.
Architecture of Maxnet
 In the Maxnet architecture, fixed symmetrical weights are present on the weighted interconnections.
 The weights between the neurons are inhibitory and fixed. A Maxnet with this structure can be used as a subnet to select the particular node whose net input is the largest.
Testing Algorithm of Maxnet
 The Maxnet uses the following activation function: f(x) = x if x ≥ 0, and f(x) = 0 if x < 0.
Testing algorithm
Step 0: Initialize the weights and the initial activations. The mutual inhibition weight ε is set with 0 < ε < 1/m, where "m" is the total number of nodes, and the self-connection weight is taken as 1. Let aj(0) denote the initial activation (the external input) of node j.
Step 1: Perform Steps 2-4 while the stopping condition is false.
Step 2: Update the activation of each node. For j = 1 to m,
 aj(new) = f[aj(old) − ε Σk≠j ak(old)]
Step 3: Save the activations obtained for use in the next iteration. For j = 1 to m,
 aj(old) = aj(new)
Step 4: Test the stopping condition for convergence of the network. The stopping condition is: if more than one node has a nonzero activation, continue; else stop.
II. HAMMING NETWORK
 The Hamming network is a two-layer feedforward neural network for classification of binary/bipolar n-tuple input vectors using the minimum Hamming distance DH (Lippmann, 1987).
 The first layer is the input layer for the n-tuple input vectors. The second layer (also called the memory layer) stores the p memory patterns.
 A p-class Hamming network has p output neurons in this layer.
 The strongest response of a neuron indicates the minimum Hamming distance between the stored pattern and the input vector.
Hamming Distance
 For two bipolar vectors x and y of dimension n, the dot product can be written as x · y = a − d, where:
 a is the number of bits in agreement between x and y (number of similar bits), and
 d is the number of bits in which x and y differ (number of dissimilar bits).
 The value d is the Hamming distance between the two vectors, and x · y = a − d.
 Since the total number of components is n, we have a + d = n, so that
 x · y = a − d = 2a − n, i.e., a = (1/2)(x · y) + n/2.
 From this equation it is clear that the weights can be set to one-half the exemplar vector, and the bias can be set initially to n/2.
Testing Algorithm of Hamming Network:
Step 0: Initialize the weights. For i = 1 to n and j = 1 to m, wij = ei(j)/2, where e(j) is the j-th exemplar vector. Initialize the bias for storing the "m" exemplar vectors: for j = 1 to m, bj = n/2.
Step 1: Perform Steps 2-4 for each input vector x.
Step 2: Calculate the net input to each unit Yj, i.e., yinj = bj + Σi xi wij, for j = 1 to m.
Step 3: Initialize the activations for the Maxnet, i.e., aj(0) = yinj, for j = 1 to m.
Step 4: The Maxnet then iterates to find the exemplar that best matches the input pattern.
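To make the weight and bias prescription above concrete, here is a small NumPy sketch of a Hamming net whose winner is then selected by a simple Maxnet loop; the exemplar vectors, the input, and ε are invented for illustration.

import numpy as np

# Two hypothetical bipolar exemplars of dimension n = 6 (m = 2 classes).
E = np.array([[ 1,  1,  1, -1, -1, -1],
              [ 1, -1,  1, -1,  1, -1]])
n, m = E.shape[1], E.shape[0]

W = E.T / 2.0                       # weights: one-half of each exemplar (n x m)
b = np.full(m, n / 2.0)             # bias: n/2 for every output unit

x = np.array([1, 1, 1, -1, -1, 1])  # input differing from exemplar 0 in one bit
a = b + x @ W                       # net input = number of agreeing bits a_j

# Maxnet competition: mutual inhibition with 0 < eps < 1/m until one unit survives.
eps = 0.4
while np.count_nonzero(a) > 1:
    a = np.maximum(0.0, a - eps * (a.sum() - a))

print(np.argmax(a))                 # index of the closest exemplar (0 here)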
7. KOHONEN SELF-ORGANIZING FEATURE MAPS:
 The Self-Organizing Feature Map (SOM) was developed by Dr. Teuvo Kohonen in 1982. A Kohonen self-organizing feature map (KSOM) is a neural network that is trained using competitive learning.
 Basic competitive learning implies that the competition process takes place before the cycle of learning.
 The competition process uses some criterion to select a winning processing element.
 After the winning processing element is selected, its weight vector is adjusted according to the learning law used.
 Feature mapping is a process which converts patterns of arbitrary dimensionality into the responses of a one- or two-dimensional array of neurons.
 A network performing such a mapping is called a feature map.
 The map reduces the higher dimensionality of the input while preserving its neighborhood topology.
Training Algorithm
Step 0: Initialize the weights wij with random values and set the learning rate α.
Step 1: Perform Steps 2-8 while the stopping condition is false.
Step 2: Perform Steps 3-5 for each input vector x.
Step 3: Compute the square of the Euclidean distance for each j = 1 to m:
 D(j) = Σi (xi − wij)²
Step 4: Find the winning unit index J such that D(J) is minimum.
Step 5: For all units j within a specified neighborhood of J, and for all i, calculate the new weights:
 wij(new) = wij(old) + α[xi − wij(old)]
Step 6: Update the learning rate α, e.g., α(t + 1) = 0.5 α(t), where t is the time step.
Step 7: Reduce the radius of the topological neighborhood at specified time intervals.
Step 8: Test the stopping condition of the network.
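The following NumPy sketch condenses Steps 0-8 into a tiny one-dimensional SOM; the data, map size, and decay schedule are illustrative choices, not part of the notes.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D points, mapped onto a 1-D chain of m = 5 neurons.
X = rng.random((200, 2))
m, alpha, radius = 5, 0.5, 2
W = rng.random((m, 2))                         # Step 0: random initial weights

for epoch in range(20):                        # Step 1: repeat until the stopping condition
    for x in X:                                # Step 2: for each input vector
        D = ((x - W) ** 2).sum(axis=1)         # Step 3: squared Euclidean distances
        J = int(np.argmin(D))                  # Step 4: winning unit
        for j in range(max(0, J - radius), min(m, J + radius + 1)):
            W[j] += alpha * (x - W[j])         # Step 5: update winner and neighbours
    alpha *= 0.5                               # Step 6: reduce the learning rate
    radius = max(0, radius - 1)                # Step 7: shrink the neighbourhood
                                               # Step 8: here, a fixed number of epochs

print(np.round(W, 2))                          # trained codebook vectors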
8. LEARNING VECTOR QUANTIZATION:
 In 1980, the Finnish professor Kohonen observed that some areas of the brain develop structures with different regions, each highly sensitive to a specific input pattern.
 LVQ is based on competition among neural units, using the winner-takes-all principle.
 Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm.
 A prototype is an early sample, model, or release of a product built to test a concept or process.
 One or more prototypes are used to represent each class in the dataset. New (unknown) data points are then assigned the class of the prototype that is nearest to them. For "nearest" to make sense, a distance measure has to be defined.
 There is no limitation on how many prototypes can be used per class; the only requirement is that there is at least one prototype for each class.
 LVQ is a special case of an artificial neural network and applies a winner-takes-all, Hebbian-learning-based approach.
 With a small difference, it is similar to the Self-Organizing Map (SOM) algorithm; SOM and LVQ were both invented by Teuvo Kohonen.
 An LVQ system is represented by prototypes W = (W1, ..., Wn). In winner-takes-all training, the winning prototype is moved closer to the data point if it classifies it correctly, and moved away if it classifies it incorrectly.
 An advantage of LVQ is that it creates prototypes that are easy to interpret for experts in the respective application domain.
Training Algorithm
Step 0: Initialize the reference vectors. This can be done in any of the following ways:
 From the given set of training vectors, take the first "m" (number of clusters) training vectors and use them as weight vectors; the remaining vectors are used for training.
 Assign the initial weights and classifications randomly.
 Use the k-means clustering method.
 Set the initial learning rate α.
Step 1: Perform Steps 2-6 while the stopping condition is false.
Step 2: Perform Steps 3-4 for each training input vector x.
Step 3: Calculate the Euclidean distance for i = 1 to n, j = 1 to m:
 D(j) = Σi (xi − wij)²
 Find the winning unit index J for which D(J) is minimum.
Step 4: Update the weights of the winning unit wJ using the following conditions:
 if T = CJ (correct class), then wJ(new) = wJ(old) + α[x − wJ(old)];
 if T ≠ CJ (wrong class), then wJ(new) = wJ(old) − α[x − wJ(old)].
Step 5: Reduce the learning rate α.
Step 6: Test the stopping condition of the training process. (The stopping condition may be a fixed number of epochs, or the learning rate reducing to a negligible value.)
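A compact NumPy rendering of the LVQ1 update rule above; the data, prototypes, labels, and learning-rate schedule are made up for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Toy 2-D data for two classes (labels 0 and 1).
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(1.0, 0.3, (50, 2))])
T = np.array([0] * 50 + [1] * 50)

# Step 0: one prototype per class, initialized from the first vector of each class.
W = np.array([X[0], X[50]])
C = np.array([0, 1])
alpha = 0.3

for epoch in range(20):                            # Step 1
    for x, t in zip(X, T):                         # Step 2
        D = ((x - W) ** 2).sum(axis=1)             # Step 3: squared Euclidean distances
        J = int(np.argmin(D))
        if t == C[J]:                              # Step 4: move towards / away from x
            W[J] += alpha * (x - W[J])
        else:
            W[J] -= alpha * (x - W[J])
    alpha *= 0.8                                   # Step 5: reduce the learning rate

print(np.round(W, 2))                              # learned class prototypes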
9. COUNTER PROPAGATION NETWORKS:
 Counter propagation networks (CPN) were proposed by Hecht-Nielsen in 1987. They are multilayer networks based on a combination of input, clustering, and output layers.
 The applications of counter propagation nets include data compression, function approximation, and pattern association.
 The counter propagation network is basically constructed from an instar-outstar model.
 This model is a three-layer neural network that performs input-output data mapping, producing an output vector y in response to an input vector x, on the basis of competitive learning.
 The three layers in an instar-outstar model are the input layer, the hidden (competitive) layer, and the output layer.
 There are two stages involved in the training process of a counter propagation net.
 The input vectors are clustered in the first stage.
 In the second stage of training, the weights from the cluster-layer units to the output units are tuned to obtain the desired response.
There are two types of counter propagation network:
i. Full counter propagation network
ii. Forward-only counter propagation network
I. FULL COUNTER PROPAGATION NETWORK:
 Full CPN efficiently represents a large number of vector pairs x:y by adaptively constructing a look-up table.
 The full CPN works best if the inverse function exists and is continuous.
 The vectors x and y propagate through the network in a counterflow manner to yield the output vectors x* and y*.
Architecture of Full Counter propagation Network
 The four major components of the instar-outstar model are the input layer, the instar, the competitive layer, and the outstar.
 For each node in the input layer there is an input value xi. All the instars are grouped into a layer called the competitive layer.
 Each instar responds maximally to a group of input vectors in a different region of space.
 The outstar model has all the nodes in the output layer and a single node in the competitive layer. The outstar looks like the fan-out of a node.
Training Algorithm for Full Counter propagation Network:
Step 0: Set the initial weights and the initial learning rate.
Step 1: Perform Steps 2-7 while the stopping condition is false for phase-I training.
Step 2: For each training input vector pair x:y presented, perform Steps 3-5.
Step 3: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.
Step 4: Find the winning cluster unit. If the dot product method is used, find the cluster unit Zj with the largest net input, for j = 1 to p.
 If the Euclidean distance method is used, find the cluster unit Zj whose squared distance from the input vectors is the smallest.
 If a tie occurs in the selection of the winner unit, the unit with the smallest index is the winner. Take the winner unit index as J.
Step 5: Update the weights over the calculated winner unit ZJ.
Step 6: Reduce the learning rates α and β.
Step 7: Test the stopping condition for phase-I training.
Step 8: Perform Steps 9-15 while the stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input pair x:y. Here α and β are small constant values.
Step 10: Set the X-input layer activations to vector x. Set the Y-input layer activations to vector y.
Step 11: Find the winning cluster unit (use the formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering unit ZJ.
Step 13: Update the weights from unit ZJ to the output layers.
Step 14: Reduce the learning rates α and β.
Step 15: Test the stopping condition for phase-II training.
II. FORWARD-ONLY COUNTER PROPAGATION NETWORK:
 The forward-only CPN is a simplified version of the full CPN.
 Forward-only CPN uses only the x vectors to form the clusters on the Kohonen units during phase-I training.
 In the forward-only CPN, the input vectors are first presented to the input units.
 First, the weights between the input layer and the cluster layer are trained.
 Then the weights between the cluster layer and the output layer are trained.
 This is a specific competitive network with the targets known.
Architecture of forward-only CPN
 It consists of three layers: input layer, cluster layer, and output layer.
 Its architecture resembles the back-propagation network, but in CPN there exist interconnections between the units in the cluster layer.
Training Algorithm for Forward-only Counter propagation network:
Step 0: Initialize the weights and the learning rate.
Step 1: Perform Steps 2-7 while the stopping condition is false for phase-I training.
Step 2: Perform Steps 3-5 for each training input x.
Step 3: Set the X-input layer activations to vector x.
Step 4: Compute the winning cluster unit J. If the dot product method is used, find the cluster unit Zj with the largest net input. If the Euclidean distance method is used, find the cluster unit Zj whose squared distance from the input pattern is the smallest. If a tie occurs in the selection of the winner unit, the unit with the smallest index is chosen as the winner.
Step 5: Perform weight updating for unit ZJ: for i = 1 to n,
 viJ(new) = viJ(old) + α[xi − viJ(old)]
Step 6: Reduce the learning rate α.
Step 7: Test the stopping condition for phase-I training.
Step 8: Perform Steps 9-15 while the stopping condition is false for phase-II training.
Step 9: Perform Steps 10-13 for each training input pair x:y.
Step 10: Set the X-input layer activations to vector x. Set the Y-output layer activations to vector y.
Step 11: Find the winning cluster unit (use the formulas from Step 4). Take the winner unit index as J.
Step 12: Update the weights entering unit ZJ.
Step 13: Update the weights from unit ZJ to the output layer:
 wJk(new) = wJk(old) + β[yk − wJk(old)]
Step 14: Reduce the learning rate β.
Step 15: Test the stopping condition for phase-II training.
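The two-phase procedure above can be compressed into a few lines of NumPy. This sketch of a forward-only CPN uses an invented function and data, and treats the cluster layer as a plain winner-takes-all layer; it is an illustration under those assumptions, not the notes' exact algorithm.

import numpy as np

rng = np.random.default_rng(2)

# Toy mapping y = f(x): approximate y = x^2 on [0, 1] with invented samples.
X = rng.random((200, 1))
Y = X ** 2

p = 10                                   # number of cluster (Kohonen) units
V = rng.random((p, 1))                   # input -> cluster weights
W = np.zeros((p, 1))                     # cluster -> output weights
alpha, beta = 0.3, 0.3

for x in X:                              # phase I: cluster the input vectors
    J = int(np.argmin(((x - V) ** 2).sum(axis=1)))
    V[J] += alpha * (x - V[J])

for x, y in zip(X, Y):                   # phase II: tune cluster -> output weights
    J = int(np.argmin(((x - V) ** 2).sum(axis=1)))
    W[J] += beta * (y - W[J])

x_test = np.array([0.5])
J = int(np.argmin(((x_test - V) ** 2).sum(axis=1)))
print(W[J])                              # rough approximation of 0.5^2 = 0.25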
10. ADAPTIVE RESONANCE THEORY NETWORK:
 Adaptive Resonance Theory (ART) was proposed as a theory of human cognitive information processing.
 The theory has led to neural models for pattern recognition and unsupervised learning, and ART systems have been used to explain different types of cognitive and brain data.
 ART addresses the stability-plasticity dilemma (stability is the ability to retain what has been learned, while plasticity is the flexibility to acquire new information): how can learning proceed in response to a huge stream of input patterns without losing the stability of what has already been learned?
 In other words, the stability-plasticity dilemma is concerned with how a system can adapt to new data while keeping what was learned before.
 For this task, a feedback mechanism is included among the ART neural network layers.
 In this neural network, the outputs of the processing elements are reflected back and forth between the layers. If an appropriate pattern is built up, resonance is reached, and adaptation can occur during this period.
 The formal analysis of how to overcome the learning instability exhibited by a competitive learning model led to the presentation of an extended hypothesis, called adaptive resonance theory (ART).
 This formal investigation indicated that a specific type of top-down learned feedback and matching mechanism could significantly overcome the instability issue.
 It was understood that top-down attentional mechanisms, which had previously been found through an investigation of connections between cognitive and reinforcement mechanisms, had similar characteristics to these code-stabilizing mechanisms.
 In other words, once it was understood how to solve the instability issue formally, it also became clear that no quantitatively new mechanism was needed; one only had to incorporate the previously discovered attentional mechanisms.
 These additional mechanisms enable code learning to self-stabilize in response to an essentially arbitrary input environment.
 Grossberg presented the basic principles of adaptive resonance theory.
 A category of ART called ART1 has been described as a system of ordinary differential equations by Carpenter and Grossberg. These theorems can predict both the order of search, as a function of the learning history of the system, and the input patterns.
 ART1 is an unsupervised learning model primarily designed for recognizing binary patterns.
 It comprises an attentional subsystem, an orienting subsystem, a vigilance parameter, and a reset module, as shown in the figure below.
 The vigilance parameter has a huge effect on the system: high vigilance produces more detailed memories.
 The ART1 attentional subsystem comprises two competitive fields, the comparison field layer L1 and the recognition field layer L2, two control gains, Gain1 and Gain2, and two short-term memory (STM) stages, S1 and S2.
 Long-term memory (LTM) traces between S1 and S2 multiply the signals in these pathways.
 Gain control enables L1 and L2 to distinguish the current stages of the running cycle.
 An STM reset wave inhibits active L2 cells when mismatches between bottom-up and top-down signals occur at L1.
 The comparison layer receives the binary external input and passes it to the recognition layer, which is responsible for matching it to a classification category.
 This outcome is fed back to the comparison layer to find out whether the category matches the input vector.
 If there is a match, a new input vector is read and the cycle begins again. If there is a mismatch, the orienting subsystem is responsible for preventing the previous category from obtaining a new match in the recognition layer.
 The two gains control the activity of the recognition layer and the comparison layer, respectively.
 The reset wave specifically and persistently inhibits the active L2 cell until the current input is removed.
 The offset of the input pattern ends its processing at L1 and triggers the offset of Gain2. The Gain2 offset causes a steady decay of STM at L2 and thereby prepares L2 to encode the next input pattern without bias.
ART1 Implementation Process:
 ART1 is a self-organizing neural network whose input and output neurons are mutually coupled using bottom-up and top-down adaptive weights that perform recognition.
 The system is first trained according to adaptive resonance theory by presenting reference pattern data, in the form of a 5*5 matrix, to the neurons for clustering within the output neurons.
 Next, the maximum number of nodes in L2 is defined, followed by the vigilance parameter.
 The input pattern registers itself as short-term memory activity over the field of nodes L1.
 Converging and diverging pathways from L1 to the coding field L2, each weighted by an adaptive long-term memory trace, transform the input into a net signal vector T.
 Internal competitive dynamics at L2 further transform T, creating a compressed code, or content-addressable memory.
 With strong competition, activation is concentrated at the L2 node that receives the maximal L1 → L2 signal.
 The overall process is divided into four phases: comparison, recognition, search, and learning.
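The comparison-recognition-search-learning cycle can be sketched in NumPy as a much-simplified ART1-style procedure. The fast-learning AND update, the choice-ordering rule, the vigilance value, and the binary inputs below are illustrative simplifications, not the full Carpenter-Grossberg differential-equation model.

import numpy as np

def art1_cluster(patterns, rho=0.7, max_categories=10):
    # Very simplified ART1-style clustering of binary row vectors.
    # rho is the vigilance parameter: higher rho -> more, finer categories.
    # Fast learning: a category prototype becomes the AND of itself and the input.
    prototypes, labels = [], []
    for x in patterns:
        placed = False
        # Recognition: try categories in order of decreasing |x AND w| / |w|.
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(x & prototypes[j]).sum() / (prototypes[j].sum() + 1e-9))
        for j in order:
            match = (x & prototypes[j]).sum() / max(x.sum(), 1)   # comparison / vigilance test
            if match >= rho:
                prototypes[j] = x & prototypes[j]                 # learning (fast AND update)
                labels.append(j)
                placed = True
                break
        if not placed and len(prototypes) < max_categories:       # search fails -> new category
            prototypes.append(x.copy())
            labels.append(len(prototypes) - 1)
    return labels, prototypes

data = np.array([[1, 1, 0, 0, 1],
                 [1, 1, 0, 0, 0],
                 [0, 0, 1, 1, 1]])
print(art1_cluster(data)[0])    # e.g. [0, 0, 1]: the first two patterns share a category

Raising rho towards 1 forces finer categories, illustrating the statement above that high vigilance produces more detailed memories.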
Advantages of Adaptive Resonance Theory (ART):
 It can be combined and used with other techniques to give more precise outcomes.
 It can be used in different fields such as face recognition, embedded systems, robotics, target recognition, medical diagnosis, signature verification, etc.
 It exhibits stability and is not disturbed by a wide range of inputs.
 It has benefits over plain competitive learning: competitive learning cannot add new clusters when they become necessary, whereas ART can.
Applications of ART:
 ART stands for Adaptive Resonance Theory. ART neural networks, used for fast, stable learning and prediction, have been applied in different areas.
 The applications include target recognition, face recognition, medical diagnosis, signature verification, and mobile robot control.
i. Target recognition:
 The fuzzy ARTMAP neural network can be used for automatic classification of targets based on their radar range profiles.
 Tests on synthetic data show that fuzzy ARTMAP can yield substantial savings in memory requirements compared to k-nearest-neighbour (kNN) classifiers.
 The use of multiwavelength profiles mainly improves the performance of both kinds of classifiers.
ii. Medical diagnosis:
 Medical databases present huge numbers of challenges found in general information management settings, where speed, usability, efficiency, and accuracy are the prime concerns.
 A direct objective of improved computer-assisted medicine is to help deliver intensive care in situations that may be less than ideal.
 Working with these issues has stimulated several ART architecture developments, including ARTMAP-IC.
iii. Signature verification:
 Automatic signature verification is a well-known and active area of research with various applications such as bank cheque confirmation, ATM access, etc.
 The training of the network is carried out using ART1, which uses global features as the input vector; the verification and recognition phase uses a two-step process.
 In the first step, the input vector is matched with the stored reference vector that was used as the training set, and in the second step, cluster formation takes place.
iv. Mobile robot control:
 Nowadays we see a wide range of robotic devices. The software controlling them, based on artificial intelligence, is still an active field of research.
 The human brain is an interesting subject as a model for such an intelligent system. Inspired by the structure of the human brain, the artificial neural network emerged.
 Similar to the brain, an artificial neural network contains numerous simple computational units (neurons) that are interconnected to allow signals to be transferred from neuron to neuron.
 Artificial neural networks are used to solve different problems with good outcomes compared to other decision algorithms.
Limitations of ART:
 Some ART networks are inconsistent, as their results depend on the order of the training data and on the learning rate.
 ART does not guarantee stable cluster formation for every input sequence.
2 MARKS QUESTIONS AND ANSWERS
1. What is recall?
Ans: If the input vectors are uncorrelated, the Hebb rule will produce the correct weights, and the response of the net when tested with one of the training vectors will be perfect recall.
2. Explain Learning Vector Quantization.
Ans: LVQ is an adaptive data classification method. It is based on training data with desired class information. LVQ uses unsupervised data clustering techniques to preprocess the data set and obtain the cluster centers.
3. What is meant by associative memory?
Ans: Associative memory is also known as content-addressable memory (CAM), associative storage, or associative array. It is a special type of memory that is optimized for performing searches through data, as opposed to providing simple direct access to the data based on an address. It can store a set of patterns as memories; when the associative memory is presented with a key pattern, it responds by producing the stored pattern which most closely resembles or relates to the key pattern. It can be viewed as data correlation: the input data is correlated with the data stored in the CAM.
4. Define auto associative memory.
Ans: An auto-associative memory network, also known as a recurrent neural network, is a type of associative memory that is used to recall a pattern from partial or degraded inputs. In an auto-associative network, the output of the network is fed back into the input, allowing the network to learn and remember the patterns it has been trained on. This type of memory network is commonly used in applications such as speech and image recognition, where the input data may be incomplete or noisy.
5. What is Hebbian learning?
Ans: The Hebb rule is the simplest and most common method of determining the weights for an associative memory neural net. It can be used when patterns are represented as either binary or bipolar vectors.
6. What is Bidirectional Associative Memory (BAM)?
Ans: Bidirectional Associative Memory (BAM) is a type of artificial neural network designed for storing and retrieving heterogeneous pattern pairs. It plays a crucial role in various applications, such as password authentication, neural network models, and cognitive management.
7. List the problems of the BAM network.
Ans:
 Storage capacity of the BAM: the number of stored associations should not exceed the number of neurons in the smaller layer.
 Incorrect convergence: the BAM may not always produce the closest association.
8. What is content addressable memory?
Ans: Content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications. It is also known as associative memory or associative storage; it compares the input search data against a table of stored data and returns the address of the matching data.
9. What is the delta rule for pattern association?
Ans: The delta rule is typically applied to the case in which pairs of patterns, consisting of an input pattern and a target output pattern, are to be associated. When an input pattern is presented to the input layer of units, the appropriate output pattern should appear on the output layer of units.
10. What is continuous BAM?
Ans: Continuous BAM transforms the input smoothly and continuously into output in the range [0, 1], using the logistic sigmoid function as the activation function for all units.
11. Which rules are used in Hebb's law?
Ans:
i. If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
ii. If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.
12. What do you mean by a counter propagation network?
Ans: A counter propagation network is a supervised neural network that can be used for multimodal processing but is not trained using the back-propagation rule. This network has been specifically developed to provide bidirectional mapping between input and output training patterns.
13. What is the Hopfield model?
Ans: The Hopfield model is a single-layered recurrent network. Like the associative memory, it is usually initialized with appropriate weights instead of being trained.
14. Define Self-Organizing Map.
Ans: A Self-Organizing Map (or Kohonen Map or SOM) is a type of artificial neural network that is also inspired by biological models of neural systems from the 1970s. It follows an unsupervised learning approach and trains its network through a competitive learning algorithm. SOM is used for clustering and mapping (dimensionality reduction), mapping multidimensional data onto a lower-dimensional space, which allows complex problems to be reduced for easy interpretation.
15. What is the principal goal of the self-organizing map?
Ans: The principal goal of the Self-Organizing Map (SOM) is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion.
16. List the stages of the SOM algorithm.
Ans:
i. Initialization: choose random values for the initial weight vectors wj.
ii. Sampling: draw a sample training input vector x from the input space.
iii. Matching: find the winning neuron I(x) whose weight vector is closest to the input vector.
iv. Updating: apply the weight update equation.
v. Continuation: keep returning to step (ii) until the feature map stops changing.
17. How are counter propagation nets trained?
Ans: Counter propagation nets are trained in two stages:
i. First stage: the input vectors are clustered. The clusters that are formed may be based on either the dot product metric or the Euclidean norm metric.
ii. Second stage: the weights from the cluster units to the output units are adapted to produce the desired response.
18. List the possible drawbacks of counter propagation networks.
Ans:
 Training a counter propagation network has the same difficulties associated with training a Kohonen network.
 Counter propagation networks tend to be larger than back-propagation networks. If a certain number of mappings are to be learned, the middle layer must have that many neurons.
19. How does forward-only differ from full counter propagation nets?
Ans:
 In forward-only counter propagation, only the x vectors are used to form the clusters on the Kohonen units during the first stage of training, whereas full counter propagation uses both the x and y vectors.
 The original presentation of forward-only counter propagation used the Euclidean distance between the input vector and the weight vector of the Kohonen unit.
20. What is forward-only counter propagation?
Ans:
 It is a simplified version of full counter propagation.
 It is intended to approximate a function y = f(x) that is not necessarily invertible.
 It may be used if the mapping from x to y is well defined, but the mapping from y to x is not.
21. Define plasticity.
Ans: The ability of a net to learn a new pattern equally well at any stage of learning is called plasticity.
22. List the components of ART1.
Ans: The components are as follows:
 The short-term memory layer (F1).
 The recognition layer (F2): it contains the long-term memory of the system.
 Vigilance parameter (ρ): a parameter that controls the generality of the memory. A larger ρ means more detailed memories; a smaller ρ produces more general memories.