ART1 Demo
Increasing vigilance causes the network to be more selective: a new prototype is introduced when the fit to the existing prototypes is not good. (Try different patterns.)
Hebbian Learning
Hebb's Postulate
"When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
D. O. Hebb, 1949
In other words: when a weight contributes to firing a neuron, the weight is increased. (If the neuron doesn't fire, it is not.)
[Figure: cell A driving cell B across a synapse]
Colloquial Corollaries
Unsupervised: weights are strengthened by the actual response to a stimulus.
Supervised: weights are strengthened by the desired response.
Unsupervised Hebbian Learning
input:  p = 1 (stimulus) or 0 (no stimulus)
output: a = 1 (response) or 0 (no response)
Banana Associator
Didn't Pavlov anticipate this?
(In the demo, each input can be toggled on and off.)
Unsupervised Hebb Rule
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q)
where p_j is the input and a_i is the actual response.
Vector form:
W(q) = W(q-1) + α a(q) p^T(q)
Training sequence: p(1), p(2), ..., p(Q)
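The vector-form update can be sketched in a few lines of NumPy (a minimal illustration; the array sizes and values are invented for the example):

```python
import numpy as np

def hebb_update(W, a, p, alpha=1.0):
    """Unsupervised Hebb rule: W(q) = W(q-1) + alpha * a(q) p(q)^T."""
    return W + alpha * np.outer(a, p)

W = np.zeros((2, 3))              # 2 neurons, 3 inputs
a = np.array([1.0, 0.0])          # actual response
p = np.array([1.0, 0.0, 1.0])     # input
W = hebb_update(W, a, p)
print(W)  # only the row of the neuron that fired is strengthened
```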
Learning Banana Smell
Initial weights: w0 = 1 (unconditioned, shape), w(0) = 0 (conditioned, smell)
Training sequence: {p0(1) = 0, p(1) = 1}, {p0(2) = 1, p(2) = 1}, ...  with α = 1
w(q) = w(q-1) + a(q) p(q)
First iteration (sight fails, smell present):
a(1) = hardlim(w0 p0(1) + w(0) p(1) - 0.5) = hardlim(1·0 + 0·1 - 0.5) = 0 (no response)
w(1) = w(0) + a(1) p(1) = 0 + 0·1 = 0
Example
Second iteration (sight works, smell present):
a(2) = hardlim(w0 p0(2) + w(1) p(2) - 0.5) = hardlim(1·1 + 0·1 - 0.5) = 1 (banana)
w(2) = w(1) + a(2) p(2) = 0 + 1·1 = 1
Third iteration (sight fails, smell present):
a(3) = hardlim(1·0 + 1·1 - 0.5) = 1 (banana)
w(3) = w(2) + a(3) p(3) = 1 + 1·1 = 2
The smell alone now produces a response, but the weight keeps growing with every presentation.
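These iterations can be replayed directly (a minimal sketch; the variable names are mine):

```python
def hardlim(n):
    return 1.0 if n >= 0 else 0.0

w0 = 1.0    # unconditioned weight (banana shape), fixed
w = 0.0     # conditioned weight (banana smell), learned
outputs = []
# training pairs (shape p0, smell p): sight fails, works, fails
for p0, p in [(0.0, 1.0), (1.0, 1.0), (0.0, 1.0)]:
    a = hardlim(w0 * p0 + w * p - 0.5)
    w = w + a * p       # unsupervised Hebb rule, alpha = 1
    outputs.append(a)

print(outputs)  # [0.0, 1.0, 1.0]: by iteration 3 the smell alone says "banana"
print(w)        # 2.0: the weight grows with each response
```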
Hebb Rule with Decay
W(q) = (1 - γ) W(q-1) + α a(q) p^T(q)
With γ = 0.1:
w(1) = w(0) + a(1) p(1) - 0.1 w(0) = 0 + 0·1 - 0.1·0 = 0
w(2) = w(1) + a(2) p(2) - 0.1 w(1) = 0 + 1·1 - 0.1·0 = 1
Example
Third iteration (sight fails, smell present):
a(3) = hardlim(w0 p0(3) + w(2) p(3) - 0.5) = hardlim(1·0 + 1·1 - 0.5) = 1 (banana)
w(3) = w(2) + a(3) p(3) - 0.1 w(2) = 1 + 1·1 - 0.1·1 = 1.9
The decay caps the weight at w_ij^max = α/γ: with no decay the weight grows without bound, while a larger decay rate gives a smaller maximum weight.
Problem of Hebb with Decay
Associations will be lost if stimuli are not occasionally presented.
If a_i = 0, then
w_ij(q) = (1 - γ) w_ij(q-1)
If γ = 0.1, this becomes
w_ij(q) = 0.9 w_ij(q-1)
so the weight decays toward zero.
[Plot: w_ij decaying from 3 toward 0 over about 30 steps]
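Both effects of the decay term can be checked numerically (a sketch with α = 1, γ = 0.1; the starting weight is arbitrary):

```python
alpha, gamma = 1.0, 0.1
w_active = w_idle = 1.9   # arbitrary starting weight

# stimulus always paired with a response: weight saturates at alpha/gamma
for _ in range(200):
    w_active = (1 - gamma) * w_active + alpha * 1.0 * 1.0   # a = p = 1
print(round(w_active, 3))   # 10.0, i.e. alpha/gamma

# stimulus never presented again (a = 0): the association decays away
for _ in range(30):
    w_idle = (1 - gamma) * w_idle
print(round(w_idle, 3))     # 0.081, nearly forgotten
```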
Instar (Recognition Network)
The instar is active when
1w^T p = ||1w|| ||p|| cos θ ≥ -b
For normalized vectors, the largest inner product occurs when the angle between the weight vector and the input vector is zero -- the input vector is equal to the weight vector.
Instar Rule
Applying decay only when the neuron is active (decay term proportional to a_i):
w_ij(q) = w_ij(q-1) + α a_i(q) p_j(q) - γ a_i(q) w_ij(q-1)
Setting γ = α:
w_ij(q) = w_ij(q-1) + α a_i(q) (p_j(q) - w_ij(q-1))
Vector form:
iw(q) = iw(q-1) + α a_i(q) (p(q) - iw(q-1))
Graphical Representation
For the case where the instar is active (a_i = 1):
iw(q) = iw(q-1) + α (p(q) - iw(q-1))
or
iw(q) = (1 - α) iw(q-1) + α p(q)
[Figure: the new weight vector moves a fraction α of the way from the old weight vector toward the input vector]
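With α = 0.5 the movement of the weight vector toward the input is easy to see (a minimal NumPy sketch; the vectors are invented):

```python
import numpy as np

alpha = 0.5
w = np.array([1.0, 0.0])   # current weight vector (one row of W)
p = np.array([0.0, 1.0])   # normalized input vector

# instar rule with the neuron active (a_i = 1):
# w(q) = (1 - alpha) * w(q-1) + alpha * p(q)
for _ in range(10):
    w = (1 - alpha) * w + alpha * p

print(w)  # close to p: the weight vector has rotated onto the input
```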
Outstar (Recall Network)
Outstar Operation
Suppose we want the outstar to recall a certain pattern a* whenever the input p = 1 is presented to the network. Let
W = a*
Then, when p = 1:
a = satlins(W p) = satlins(a* · 1) = a*
(assuming the elements of a* lie within the linear range of satlins).
Outstar Rule
For the outstar rule we make the weight decay term proportional to the input of the network:
w_ij(q) = w_ij(q-1) + β a_i(q) p_j(q) - γ p_j(q) w_ij(q-1)
Setting γ = β gives the vector form:
w_j(q) = w_j(q-1) + β (a(q) - w_j(q-1)) p_j(q)
Example - Pineapple Recall
Definitions:
a = satlins(W0 p0 + W p)
W0 = I (3×3 identity)
p0 = [shape; texture; weight], p = 1 if the pineapple smell is present, 0 otherwise
Pineapple measurements: p0 = [-1; -1; 1]
β = 1
First iteration (smell present, measurements absent):
a(1) = satlins([0; 0; 0] + [0; 0; 0]·1) = [0; 0; 0] (no response)
w1(1) = w1(0) + (a(1) - w1(0)) p(1) = [0; 0; 0] + ([0; 0; 0] - [0; 0; 0])·1 = [0; 0; 0]
Convergence
Second iteration (measurements and smell present):
a(2) = satlins([-1; -1; 1] + [0; 0; 0]·1) = [-1; -1; 1] (measurements given)
w1(2) = w1(1) + (a(2) - w1(1)) p(2) = [0; 0; 0] + ([-1; -1; 1] - [0; 0; 0])·1 = [-1; -1; 1]
Third iteration (measurements absent, smell present):
a(3) = satlins([0; 0; 0] + [-1; -1; 1]·1) = [-1; -1; 1] (measurements recalled)
w1(3) = w1(2) + (a(3) - w1(2)) p(3) = [-1; -1; 1] + ([-1; -1; 1] - [-1; -1; 1])·1 = [-1; -1; 1]
The weights have converged: the smell alone recalls the measurements.
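The three outstar iterations can be checked in NumPy (a sketch assuming β = 1 and a pineapple measurement vector of [-1, -1, 1]):

```python
import numpy as np

def satlins(n):
    return np.clip(n, -1.0, 1.0)

W0 = np.eye(3)                        # unconditioned weights (identity)
p_meas = np.array([-1.0, -1.0, 1.0])  # pineapple: shape, texture, weight
w = np.zeros(3)                       # conditioned weight column
beta = 1.0

def step(w, p0, p):
    a = satlins(W0 @ p0 + w * p)
    w = w + beta * (a - w) * p        # outstar rule
    return a, w

a1, w = step(w, np.zeros(3), 1.0)   # smell only: no response yet
a2, w = step(w, p_meas, 1.0)        # smell + measurements: weights learn them
a3, w = step(w, np.zeros(3), 1.0)   # smell only: measurements recalled
print(a3)
```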
Supervised Hebbian Learning
Linear Associator
a = W p,   a_i = Σ_{j=1}^{R} w_ij p_j
Training set: {p_1, t_1}, {p_2, t_2}, ..., {p_Q, t_Q}
Hebb Rule
w_ij^new = w_ij^old + f_i(a_iq) g_j(p_jq)
where p_jq (the input pattern) is the presynaptic signal and a_iq (the actual output) is the postsynaptic signal.
Simplified form:
w_ij^new = w_ij^old + a_iq p_jq
Supervised form (the target replaces the actual output):
w_ij^new = w_ij^old + t_iq p_jq
Matrix form:
W = t_1 p_1^T + t_2 p_2^T + ... + t_Q p_Q^T = T P^T
where T = [t_1 t_2 ... t_Q] and P = [p_1 p_2 ... p_Q].
Performance Analysis
a = W p_k = (Σ_{q=1}^{Q} t_q p_q^T) p_k = Σ_{q=1}^{Q} t_q (p_q^T p_k)
If the prototype patterns are orthonormal:
p_q^T p_k = 1 for q = k, and 0 for q ≠ k
so a = W p_k = t_k  (the output is the correct target).
If the patterns are normalized but not orthogonal:
a = W p_k = t_k + Σ_{q ≠ k} t_q (p_q^T p_k)
where the second sum is an error term.
Example
Banana and apple prototype patterns, normalized:
p_1 = [-1; 1; -1] → p_1 = [-0.5774; 0.5774; -0.5774], t_1 = -1 (banana)
p_2 = [1; 1; -1] → p_2 = [0.5774; 0.5774; -0.5774], t_2 = 1 (apple)
W = T P^T = [1.1548 0 0]
Tests:
Banana: W p_1 = [1.1548 0 0] [-0.5774; 0.5774; -0.5774] = -0.6668
Apple: W p_2 = [1.1548 0 0] [0.5774; 0.5774; -0.5774] = 0.6668
The outputs have the correct signs but do not equal the targets exactly, because the patterns are not orthogonal.
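The Hebb-rule weights and test outputs can be reproduced numerically (a sketch assuming the prototype values above, i.e. ±1 vectors scaled by 1/√3):

```python
import numpy as np

p1 = np.array([-1.0, 1.0, -1.0]) / np.sqrt(3)   # banana, t1 = -1
p2 = np.array([1.0, 1.0, -1.0]) / np.sqrt(3)    # apple,  t2 = +1
P = np.column_stack([p1, p2])
T = np.array([[-1.0, 1.0]])

W = T @ P.T                        # Hebb rule: W = T P^T
print(np.round(W, 4))              # [[1.1547, 0, 0]]
print(round((W @ p1)[0], 4))       # -0.6667: right sign, not exactly -1
print(round((W @ p2)[0], 4))       #  0.6667
```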
Pseudoinverse Rule - (1)
Goal: W p_q = t_q for q = 1, 2, ..., Q
Performance index (sum of squared errors):
F(W) = Σ_q ||t_q - W p_q||^2
Matrix form: W P = T, where
T = [t_1 t_2 ... t_Q], P = [p_1 p_2 ... p_Q]
F(W) = ||T - W P||^2 = ||E||^2,   ||E||^2 = Σ_i Σ_j e_ij^2

Pseudoinverse Rule - (2)
W P = T
Minimizing
F(W) = ||T - W P||^2 = ||E||^2
gives
W = T P^+,  where P^+ = (P^T P)^{-1} P^T
Relationship to the Hebb Rule
Hebb rule: W = T P^T
Pseudoinverse rule: W = T P^+,  P^+ = (P^T P)^{-1} P^T
If the prototype patterns are orthonormal, then P^T P = I and
P^+ = (P^T P)^{-1} P^T = P^T
so the two rules coincide.
Example
{p_1 = [1; -1; -1], t_1 = -1}   {p_2 = [1; 1; -1], t_2 = 1}
W = T P^+, with T = [-1 1] and P = [p_1 p_2]
P^+ = (P^T P)^{-1} P^T = [3 1; 1 3]^{-1} [1 -1 -1; 1 1 -1] = [0.25 -0.5 -0.25; 0.25 0.5 -0.25]
W = T P^+ = [0 1 0]
Tests:
W p_1 = [0 1 0] [1; -1; -1] = -1    W p_2 = [0 1 0] [1; 1; -1] = 1
The pseudoinverse rule reproduces the targets exactly, even though the patterns are not orthogonal.
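NumPy's `np.linalg.pinv` computes P⁺ directly, so the example can be verified (a sketch using the pattern and target values above):

```python
import numpy as np

P = np.array([[ 1.0,  1.0],
              [-1.0,  1.0],
              [-1.0, -1.0]])       # columns are p1, p2
T = np.array([[-1.0, 1.0]])        # targets t1, t2

W = T @ np.linalg.pinv(P)          # pseudoinverse rule: W = T P^+
print(np.round(W, 4))              # approximately [[0, 1, 0]]
print(np.round(W @ P))             # [[-1, 1]]: targets matched exactly
```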
Autoassociative Memory
For autoassociation the targets equal the inputs (t_q = p_q), so for three stored prototype patterns (vectors of ±1 values):
W = p_1 p_1^T + p_2 p_2^T + p_3 p_3^T
Tests:
[Figure: recall of stored patterns from 50% occluded and 67% occluded inputs; target vs. actual shown for each]
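A small stand-in for the occlusion test (the slides use larger image patterns; these three mutually orthogonal ±1 vectors are invented for illustration):

```python
import numpy as np

# three mutually orthogonal bipolar prototype patterns
p1 = np.array([1, 1, 1, 1, 1, 1, 1, 1], dtype=float)
p2 = np.array([1, -1, 1, -1, 1, -1, 1, -1], dtype=float)
p3 = np.array([1, 1, -1, -1, 1, 1, -1, -1], dtype=float)

# autoassociative Hebb weights: W = p1 p1^T + p2 p2^T + p3 p3^T
W = np.outer(p1, p1) + np.outer(p2, p2) + np.outer(p3, p3)

probe = p2.copy()
probe[4:] = 0.0                      # occlude half of the stored pattern
recalled = np.sign(W @ probe)        # hardlims-style readout
print(np.array_equal(recalled, p2))  # True: the full pattern is recovered
```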
Unsupervised:
W^new = W^old + a_q p_q^T