
Outline

• Competitive Networks
– Hamming network
– Self-organizing feature maps
– Learning vector quantization
Hamming Network



Hamming Network – cont.
• Layer 1
– Consists of multiple instar networks to recognize
more than one pattern
– The output of each neuron is the inner product
between its weight vector (a stored prototype) and
the input vector
• The output from the first layer indicates the
correlation between the prototype pattern and the
input vector
– It is feedforward
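
Below is a minimal sketch of this feedforward layer, assuming the prototypes are stored as the rows of the weight matrix and the usual bias of R (the number of input elements); the function name and shapes are illustrative, not from the slides.

```python
import numpy as np

def hamming_layer1(W1, p):
    """Feedforward layer 1: one instar per stored prototype.

    W1: (S, R) matrix whose rows are the prototype vectors
    p:  (R,) input vector (elements typically +1 or -1)
    Returns W1 @ p + R, i.e. the inner product (correlation) of each
    prototype with the input, shifted by the assumed bias R so the
    outputs are never negative for +/-1 vectors.
    """
    R = W1.shape[1]          # bias b = R, the number of input elements
    return W1 @ p + R
```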
Hamming Network – cont.
• Layer 2
– It is a recurrent network, called a competitive
network
– The neurons in this layer compete with each
other to determine a winner
• After competition, only one neuron will have a
nonzero output
• The winning neuron indicates which category of input
was presented to the network



Hamming Network – cont.
• Layer 2 – cont.
– The second layer output is updated
$a^2(t+1) = \mathrm{poslin}(W^2 a^2(t))$
– The weights are set as follows
$w^2_{ij} = \begin{cases} 1, & \text{if } i = j \\ -\varepsilon, & \text{otherwise} \end{cases}$, where $0 < \varepsilon < \dfrac{1}{S-1}$



Hamming Network – cont.
• Layer 2 – cont.
– Lateral inhibition
$a_i^2(t+1) = \mathrm{poslin}\Big(a_i^2(t) - \varepsilon \sum_{j \neq i} a_j^2(t)\Big)$
– This competition is called a winner-take-all
competition
• Since only one neuron will have a nonzero output
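
A minimal sketch of this recurrence, iterating the lateral-inhibition update above until a single winner remains; the stopping condition and default epsilon are illustrative choices.

```python
import numpy as np

def poslin(n):
    """Positive linear transfer function: max(0, n) elementwise."""
    return np.maximum(n, 0.0)

def hamming_layer2(a1, eps=None, max_iters=100):
    """Iterate the lateral-inhibition recurrence until at most one
    neuron has a nonzero output (winner-take-all)."""
    a = np.asarray(a1, dtype=float).copy()
    S = a.size
    if eps is None:
        eps = 0.5 / (S - 1)          # any 0 < eps < 1/(S-1) works
    for _ in range(max_iters):
        # each neuron keeps its own output and is inhibited by the others:
        # a_i(t+1) = poslin(a_i(t) - eps * sum_{j != i} a_j(t))
        a = poslin(a - eps * (a.sum() - a))
        if np.count_nonzero(a) <= 1:
            break
    return a
```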



Competitive Layer
• In a competitive layer, each neuron excites
itself and inhibits all the other neurons
– A transfer function that does the job of a
recurrent competitive layer
a  compet (n)
– It works by finding the neuron with the largest
net input and setting its output to 1 (in case of a
tie, the neuron with the lowest index wins); all
other outputs are set to 0
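
A minimal sketch of this transfer function:

```python
import numpy as np

def compet(n):
    """Winner-take-all transfer function: output 1 for the neuron
    with the largest net input, 0 for all others. np.argmax returns
    the lowest index on ties, matching the convention above."""
    a = np.zeros_like(n, dtype=float)
    a[np.argmax(n)] = 1.0
    return a
```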





Competitive Learning
• A learning rule to train the weights in a
competitive network
– Instar rule
$w_{ij}(q) = w_{ij}(q-1) + \alpha\, a_i(q)\,(p_j(q) - w_{ij}(q-1))$
– In other words,
${}_i w(q) = (1-\alpha)\,{}_i w(q-1) + \alpha\, p(q)$ for $i = i^*$
${}_i w(q) = {}_i w(q-1)$ otherwise



Competitive Learning Example
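
The original example figure is not reproduced here; in its place is a minimal training-loop sketch with made-up two-dimensional data, applying the Kohonen rule from the previous slide. All data and parameter choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up data: normalized inputs drawn from two rough clusters
p_data = np.vstack([rng.normal([1.0, 0.0], 0.1, size=(20, 2)),
                    rng.normal([0.0, 1.0], 0.1, size=(20, 2))])
p_data /= np.linalg.norm(p_data, axis=1, keepdims=True)

alpha = 0.5
W = rng.normal(size=(2, 2))                  # two neurons, rows = weight vectors
W /= np.linalg.norm(W, axis=1, keepdims=True)

for q in rng.permutation(len(p_data)):
    p = p_data[q]
    i_star = np.argmax(W @ p)                # winner: largest inner product
    W[i_star] = (1 - alpha) * W[i_star] + alpha * p   # Kohonen rule, winner only
# after training, each row of W sits near the center of one cluster
```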



Problems with Competitive Layers
• Choice of learning rate
– A learning rate near zero results in slow but stable learning
– A learning rate near one results in fast learning, but the weights may oscillate
• Stability problem when clusters are close to each other
• Dead neuron
– A neuron whose initial weight vector is so far from any input
vectors that it never wins the competition
• The number of classes must be known



Self-Organizing Feature Maps
• Self-Organizing Feature Maps – SOFM
– Motivated by competitive layers in biology
– The weight vectors for all neurons within a
certain neighborhood of the winning neuron are
updated

${}_i w(q) = (1-\alpha)\,{}_i w(q-1) + \alpha\, p(q)$ for $i \in N_{i^*}$

${}_i w(q) = {}_i w(q-1)$ otherwise
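
A minimal sketch of one SOFM update on a one-dimensional map, assuming the neighborhood contains all neurons within grid distance d of the winner. The winner here is chosen by Euclidean distance, one common choice (the base network in these slides selects it with the inner product).

```python
import numpy as np

def sofm_step(W, p, alpha, d):
    """One SOFM update on a 1-D map: every neuron within grid
    distance d of the winner moves toward the input p."""
    i_star = np.argmin(np.linalg.norm(W - p, axis=1))   # winning neuron
    for i in range(len(W)):
        if abs(i - i_star) <= d:                        # i in neighborhood N_{i*}
            W[i] = (1 - alpha) * W[i] + alpha * p
    return W
```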





Improving Feature Maps
• Variable neighborhood sizes during training
• Variable learning rate
– The winning neuron uses a larger learning rate
• Alternative expression of network input
– Use directly computed distance instead of inner
product
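
The sketch below combines the three improvements: a neighborhood that shrinks over training, a learning rate that decays (and is larger for the winner), and a net input computed directly from distance. The decay schedules are illustrative, not prescribed by the slides.

```python
import numpy as np

def sofm_train(W, data, epochs=20, seed=0):
    """Train a 1-D SOFM with a shrinking neighborhood, a decaying
    learning rate, a larger step for the winner, and a winner chosen
    by distance rather than inner product."""
    rng = np.random.default_rng(seed)
    for epoch in range(epochs):
        frac = epoch / epochs
        d = max(1, int(len(W) // 2 * (1 - frac)))        # shrinking neighborhood
        alpha = 0.5 * (1 - frac) + 0.01                  # decaying learning rate
        for p in data[rng.permutation(len(data))]:
            i_star = np.argmin(np.linalg.norm(W - p, axis=1))  # distance-based winner
            for i in range(len(W)):
                if abs(i - i_star) <= d:
                    step = alpha if i == i_star else 0.5 * alpha  # winner learns faster
                    W[i] = (1 - step) * W[i] + step * p
    return W
```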



Learning Vector Quantization
• Learning Vector Quantization – LVQ



Learning Vector Quantization – cont.
• The net input of the first layer is calculated
based on a distance measure
$n^1 = -\begin{bmatrix} \|{}_1w^1 - p\| \\ \|{}_2w^1 - p\| \\ \vdots \\ \|{}_{S^1}w^1 - p\| \end{bmatrix}$
• The output of the first layer of the LVQ is
$a^1 = \mathrm{compet}(n^1)$



Learning Vector Quantization – cont.

• The second layer of the LVQ network combines
subclasses into a single class through the $W^2$ matrix
$(w^2_{ki} = 1) \Rightarrow$ subclass $i$ is a part of class $k$
– By combining subclasses to form a class, LVQ
can create complex class boundaries
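
A sketch of how the fixed $W^2$ matrix might be built from a subclass-to-class assignment; the example mapping at the end is made up.

```python
import numpy as np

def make_W2(subclass_of, num_classes):
    """W2[k, i] = 1 exactly when subclass i belongs to class k;
    every other entry is 0."""
    W2 = np.zeros((num_classes, len(subclass_of)))
    for i, k in enumerate(subclass_of):
        W2[k, i] = 1.0
    return W2

# illustrative: four subclasses combined into two classes
W2 = make_W2([0, 0, 1, 1], num_classes=2)
# the final output is then a2 = W2 @ a1
```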



Learning Vector Quantization – cont.

• LVQ Learning
– Competitive learning with supervision
• Supervised competitive learning
– Training sequence
$\{p_1, t_1\}, \{p_2, t_2\}, \ldots, \{p_Q, t_Q\}$
• where each target vector must contain a single 1,
with all other elements equal to zero



Learning Vector Quantization – cont.

• LVQ Learning – continued


– At each iteration, an input vector p is presented
to the network, the hidden neurons compete, and
neuron i* wins the competition. The output from
the first layer is then multiplied by $W^2$ to get the
final output. Note that only one element of the
output (at index k*) is 1; all others are zero.



Learning Vector Quantization – cont.

• LVQ Learning – continued


${}_{i^*}w^1(q) = {}_{i^*}w^1(q-1) + \alpha\,(p(q) - {}_{i^*}w^1(q-1))$, if $a^2_{k^*} = t_{k^*} = 1$

${}_{i^*}w^1(q) = {}_{i^*}w^1(q-1) - \alpha\,(p(q) - {}_{i^*}w^1(q-1))$, if $a^2_{k^*} = 1 \neq t_{k^*} = 0$





Learning Vector Quantization – cont.

• LVQ2
– There are some limitations with LVQ
– LVQ2 is a modified version of LVQ
• When the LVQ network correctly classifies an input
vector, the weights of only one neuron (the winner)
are moved toward the input vector
• If the input vector is incorrectly classified, the
weights of two neurons are updated: one weight
vector is moved away from the input as in LVQ, and
the other (the closest one that classifies the input
correctly) is moved toward the input vector
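
A sketch of the LVQ2-style correction just described: on a misclassification, the wrong winner is pushed away and the nearest prototype of the correct class is pulled in. Published LVQ2 adds a "window" condition on the two distances, which is omitted here; `classes[i]` (the class label of prototype i) is an assumed bookkeeping array, not notation from the slides.

```python
import numpy as np

def lvq2_step(W1, classes, p, target_class, alpha):
    """LVQ2-style update: a correct win attracts the winner; a wrong
    win repels the winner and attracts the nearest correct prototype."""
    classes = np.asarray(classes)
    dists = np.linalg.norm(W1 - p, axis=1)
    i_star = np.argmin(dists)
    if classes[i_star] == target_class:
        W1[i_star] += alpha * (p - W1[i_star])          # correct: move toward p
    else:
        W1[i_star] -= alpha * (p - W1[i_star])          # wrong: move away from p
        correct = np.flatnonzero(classes == target_class)
        j = correct[np.argmin(dists[correct])]          # nearest correct prototype
        W1[j] += alpha * (p - W1[j])                    # move it toward p
    return W1
```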
