PRESENTED BY:
Er. Abhishek K. Upadhyay
ECE (Regular), 2015
 A neural network is a processing device whose design was inspired by the structure and functioning of the human brain and its components.
 Different neural network algorithms are used for recognizing patterns.
 The various algorithms differ in their learning mechanisms.
 All learning methods used for adaptive neural networks can be classified into two major categories:
 Supervised learning
 Unsupervised learning
 The network's capability for solving complex pattern recognition problems is examined under:
 Noise in weights
 Noise in inputs
 Loss of connections
 Missing information and added information
 The primary function of the network is to retrieve a pattern stored in memory when an incomplete or noisy version of that pattern is presented.
 It is a two-layer classifier of binary bipolar vectors.
 The first layer, the Hamming network itself, is capable of selecting the stored class that is at the minimum Hamming distance (HD) from the test vector presented at the input.
 The second layer, MAXNET, merely suppresses all outputs except that of the winning class.
 The Hamming network is of the feedforward type. The number of output neurons in this part equals the number of classes.
 The strongest response of a neuron in this layer indicates the minimum HD between the input vector and the class this neuron represents.
 The second layer is MAXNET, which operates as a recurrent network. It involves both excitatory and inhibitory connections.
 The purpose of this layer is to compute, in a feedforward manner, the values of (n - HD), where HD is the Hamming distance between the search argument and the encoded class prototype vector.
 For the Hamming net we have an input vector X, p classes (and hence p output neurons), and an output vector Y = [ y1, …, yp ].
 For any output neuron m, m = 1, 2, …, p, we have
Wm = [ wm1, wm2, …, wmn ]t
as the weights between the input X and output neuron m.
 Also assume that for each class m one has the prototype vector S(m) as the standard to be matched.
 For classifying p classes, one can say the m'th output is 1 if and only if X = S(m).
 The outputs of the classifier are
XtS(1), XtS(2), … XtS(m), … XtS(p)
 So when X = S(m), the m'th output is n and the other outputs are smaller than n; this happens only when W(m) = S(m).
Xt S(m) = (n - HD(X, S(m))) - HD(X, S(m)) = n - 2 HD(X, S(m))
∴ ½ XtS(m) = n/2 - HD(X, S(m))
So the weight matrix is WH = ½S:

WH = ½ | S1(1)  S2(1)  …  Sn(1) |
       | S1(2)  S2(2)  …  Sn(2) |
       |  ⋮      ⋮          ⋮   |
       | S1(p)  S2(p)  …  Sn(p) |

where Si(m) denotes the i'th component of the prototype S(m).
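 The identity XtS(m) = n - 2 HD(X, S(m)) is easy to check numerically; the following minimal NumPy sketch uses two arbitrary bipolar vectors (illustrative values, not the prototypes from the example later):

import numpy as np

# Illustrative bipolar vectors (not the C/I/T prototypes used later).
X = np.array([1, -1, 1,  1, -1, 1, -1,  1, 1])
S = np.array([1,  1, 1, -1, -1, 1, -1, -1, 1])
n = X.size

hd = np.sum(X != S)        # Hamming distance: number of differing positions
print(X @ S, n - 2 * hd)   # both print 3: the two sides agree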
 By giving a fixed bias n/2 to the input,
netm = ½ XtS(m) + n/2 for m = 1, 2, …, p
or
netm = n - HD(X, S(m))
 To scale the outputs from the range 0…n down to 0…1, one can apply the transfer function
f(netm) = (1/n) netm for m = 1, 2, …, p
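 A minimal NumPy sketch of this first layer, assuming the p prototypes are stacked as the rows of a matrix S (hamming_layer is an illustrative name, not from the original):

import numpy as np

def hamming_layer(X, S):
    # S is a p-by-n matrix whose rows are the bipolar prototype vectors.
    p, n = S.shape
    net = (S / 2.0) @ X + n / 2.0   # netm = ½ X'S(m) + n/2 = n - HD(X, S(m))
    return net / n                  # f(netm) = netm / n, in the range 0..1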
 So the node with the highest output is the node with the smallest HD between the input and the prototype vectors S(1), …, S(p); in the ideal case X = S(m),
f(netm) = 1
while for the other nodes
f(netm) < 1
 The purpose of MAXNET is to single out the largest of { y1, …, yp } and drive all the other outputs to 0.
 So ε is bounded by 0 < ε < 1/p, where ε is the lateral interaction coefficient, and

WM = |  1   -ε   …   -ε |
     | -ε    1   …   -ε |
     |  ⋮             ⋮ |
     | -ε   -ε   …    1 |     (p × p)
 And the recursion is
netk = WM Yk
Yk+1 = f(netk)
 So the transfer function is
f(net) = net for net > 0
f(net) = 0 for net ≤ 0
 Each entry of the updated vector decreases at the k’th
recursion step under the MAXNET update algorithm,
with the largest entry decreasing slowest.
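 A minimal NumPy sketch of this recursion; the stopping test (quit once a single positive entry remains) and the default ε = 0.5/p are assumptions, not part of the original formulation:

import numpy as np

def maxnet(y0, eps=None, max_iter=100):
    # Recurrent second layer: iterate Yk+1 = f(WM Yk) until only one
    # positive entry (the winner) remains.
    p = y0.size
    if eps is None:
        eps = 0.5 / p                     # assumed default; any 0 < eps < 1/p works
    W_M = (1 + eps) * np.eye(p) - eps     # 1 on the diagonal, -eps elsewhere
    y = y0.astype(float)
    for _ in range(max_iter):
        y = np.maximum(W_M @ y, 0.0)      # f(net) = net if net > 0, else 0
        if np.count_nonzero(y) <= 1:      # assumed stopping test: single winner left
            break
    return y

print(maxnet(np.array([0.777, 0.333, 0.555]), eps=0.2))   # -> approx. [0.46 0. 0.]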
 Step 1: Consider that the patterns to be classified are a1, a2, …, ap, each pattern n-dimensional. The weights connecting the inputs to the neurons of the Hamming network are given by the weight matrix

WH = ½ | a11  a12  …  a1n |
       | a21  a22  …  a2n |
       |  ⋮    ⋮        ⋮ |
       | ap1  ap2  …  apn |
 Step 2: The n-dimensional input vector X is presented to the input.
 Step 3: The net input of each neuron of the Hamming network is
netm = ½ XtS(m) + n/2 for m = 1, 2, …, p
where n/2 is the fixed bias applied to the input of each neuron of this layer.
 Step 4: The output of each neuron of the first layer is
f(netm) = (1/n) netm for m = 1, 2, …, p
 Step 5: The output of the Hamming network is applied as input to MAXNET:
Y0 = f(netm)
 Step 6: The weights connecting the neurons of the Hamming network and MAXNET are taken as

WM = |  1   -ε   …   -ε |
     | -ε    1   …   -ε |
     |  ⋮             ⋮ |
     | -ε   -ε   …    1 |     (p × p)
 Where ε must be bounded by 0 < ε < 1/p; the quantity ε is called the lateral interaction coefficient. The dimension of WM is p×p.
 Step 7: The output of MAXNET is calculated as
netk = WM Yk
Yk+1 = f(netk)
with the transfer function
f(net) = net for net > 0
f(net) = 0 for net ≤ 0
where k = 1, 2, 3, … denotes the number of the iteration.
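 Steps 1-7 can be collected into a single routine. A minimal NumPy sketch, assuming bipolar patterns stacked row-wise in S; classify is an illustrative name and the early-stopping test is an assumption:

import numpy as np

def classify(X, S, eps=None, max_iter=100):
    # Steps 1-4: Hamming layer, Y0_m = (1/n)(½ X'S(m) + n/2).
    p, n = S.shape
    y = ((S / 2.0) @ X + n / 2.0) / n
    # Steps 5-7: MAXNET with 1 on the diagonal and -eps off the diagonal.
    if eps is None:
        eps = 0.5 / p                     # must satisfy 0 < eps < 1/p
    W_M = (1 + eps) * np.eye(p) - eps
    for _ in range(max_iter):
        y = np.maximum(W_M @ y, 0.0)      # f(net) = net if net > 0, else 0
        if np.count_nonzero(y) <= 1:
            break
    return int(np.argmax(y))              # index of the winning class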
 Ex: To build a Hamming net for classifying C, I, T, take
S(1) = [ 1 1 1 1 -1 -1 1 1 1 ]t
S(2) = [ -1 1 -1 -1 1 -1 -1 1 -1 ]t
S(3) = [ 1 1 1 -1 1 -1 -1 1 -1 ]t
 So,

WH = ½ |  1   1   1   1  -1  -1   1   1   1 |
       | -1   1  -1  -1   1  -1  -1   1  -1 |
       |  1   1   1  -1   1  -1  -1   1  -1 |
 For the input
X = [ 1 1 1 1 1 1 1 1 1 ]t
the first layer gives
net = WH X + n/2 = [ 7 3 5 ]t
 And
Y0 = f(net) = [ 7/9 3/9 5/9 ]t = [ 0.777 0.333 0.555 ]t
 Input to MAXNET: select ε = 0.2 < 1/3 (= 1/p)
 So,

WM = |  1    -0.2  -0.2 |
     | -0.2   1    -0.2 |
     | -0.2  -0.2   1   |

 And
netk = WM Yk
Yk+1 = f(netk)
 k = 0:

net0 = WM Y0 = |  1    -0.2  -0.2 | | 0.777 |   | 0.599 |
               | -0.2   1    -0.2 | | 0.333 | = | 0.067 |
               | -0.2  -0.2   1   | | 0.555 |   | 0.333 |

Y1 = f(net0) = [ 0.599 0.067 0.333 ]t
 k = 1:
net1 = WM Y1 = [ 0.520 -0.120 0.200 ]t
Y2 = f(net1) = [ 0.520 0 0.200 ]t
 k = 2:
net2 = WM Y2 = [ 0.480 -0.144 0.096 ]t
Y3 = f(net2) = [ 0.480 0 0.096 ]t
 k = 3:
net3 = WM Y3 = [ 0.461 -0.115 0 ]t
Y4 = f(net3) = [ 0.461 0 0 ]t
 The result computed by the network after four recurrences shows that the vector X presented at the input is at the smallest HD from S(1).
 So it represents the distorted character C.
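 The whole example can be reproduced with a short NumPy script; the printed values match the iterations above up to the rounding used on the slides:

import numpy as np

S = np.array([[ 1, 1,  1,  1, -1, -1,  1, 1,  1],    # C
              [-1, 1, -1, -1,  1, -1, -1, 1, -1],    # I
              [ 1, 1,  1, -1,  1, -1, -1, 1, -1]])   # T
X = np.ones(9)                        # the distorted C from the example
n = 9

y = ((S / 2.0) @ X + n / 2.0) / n     # Y0 = [0.778 0.333 0.556]
W_M = 1.2 * np.eye(3) - 0.2           # WM with eps = 0.2
for k in range(4):
    y = np.maximum(W_M @ y, 0.0)
    print(k, np.round(y, 3))          # final line: 3 [0.461 0.    0.   ]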
 Noise is introduced in the input by adding random numbers.
 The Hamming network and MAXNET recognize all the stored strings correctly even when noise is introduced at testing time.
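 An illustrative version of such a test, reusing X, S, and classify from the sketches above; the Gaussian noise scale and the re-binarization step are assumptions:

import numpy as np

rng = np.random.default_rng(0)
noisy = np.sign(X + rng.normal(scale=0.5, size=X.size))
noisy[noisy == 0] = 1          # np.sign can yield 0; force a bipolar value
print(classify(noisy, S))      # typically still prints 0, i.e. class C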
 In the network, neurons are interconnected and every interconnection carries an interconnecting coefficient called a weight.
 If some of these weights are set to zero, how does that affect the classification or recognition?
 Of interest is the number of connections that can be removed without affecting the network's performance.
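 An illustrative connection-loss experiment, reusing S and X from the example above; the 20% drop rate is an arbitrary choice:

import numpy as np

rng = np.random.default_rng(1)
W_H = S / 2.0                          # first-layer weights from the prototypes
mask = rng.random(W_H.shape) > 0.2     # keep ~80% of the connections
y = ((W_H * mask) @ X + 9 / 2.0) / 9   # first-layer outputs with lost links
print(int(np.argmax(y)))               # often still 0 (class C)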
 Missing information means that some of the on pixels in the pattern grid are turned off.
 How much information can be missing while the strings are still recognized correctly varies from string to string.
 Of interest is the number of pixels that can be switched off for each stored string.
 Adding information means that some of the off pixels in the pattern grid are turned on.
 Of interest is the number of pixels that can be turned on for each of the strings stored in the network.
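 Both perturbations can be sketched by flipping pixels of a stored pattern, reusing classify and S from the sketches above; the flip counts are arbitrary:

import numpy as np

rng = np.random.default_rng(2)
pattern = S[0].copy()                  # start from the stored C

on = np.flatnonzero(pattern == 1)      # "missing information": on pixels go off
missing = pattern.copy()
missing[rng.choice(on, size=2, replace=False)] = -1

off = np.flatnonzero(pattern == -1)    # "adding information": off pixels go on
added = pattern.copy()
added[rng.choice(off, size=1)] = 1

print(classify(missing, S), classify(added, S))   # both expected to print 0 (C)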
 The network architecture is very simple.
 This network is a counterpart of the Hopfield autoassociative network.
 Its advantage is that it involves fewer neurons and fewer connections than its counterpart.
 There is no capacity limitation.
 The Hamming network retrieves only the closest class index, not the entire prototype vector.
 It is not able to restore any of the key patterns; it provides passive classification only.
 The network has no mechanism for data restoration.
 It cannot restore distorted patterns.
 Jacek M. Zurada, Introduction to Artificial Neural Systems, Jaico Publishing House, New Delhi, India.
 Amit Kumar Gupta and Yash Pal Singh, “Analysis of Hamming Network and MAXNET of Neural Network Method in the String Recognition”, IEEE, 2011.
 C. M. Bishop, Neural Networks for Pattern Recognition, Oxford University Press, Oxford, 2003.