Perceptron
DR. ABHISHEK SARKAR
ASSISTANT PROFESSOR
MECHANICAL ENGG., BITS
Hebb Network
Example
• Training set (the AND function with bipolar inputs and targets):

  Inputs            Targets
  x1    x2    b     y
   1     1    1      1
   1    -1    1     -1
  -1     1    1     -1
  -1    -1    1     -1

• Starting from zero weights and bias, present the first pattern (x1 = 1, x2 = 1, y = 1). So

  w1(new) = w1(old) + x1·y = 0 + 1 × 1 = 1
  w2(new) = w2(old) + x2·y = 0 + 1 × 1 = 1
  b(new) = b(old) + y = 0 + 1 = 1

• These updated weights are used as the initial weights when the second input pattern is presented; the remaining patterns are handled the same way.
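As a quick check, the full Hebb pass over all four patterns can be scripted; the following is a minimal sketch (not code from the slides, and the variable names are my own):

```python
import numpy as np

# Bipolar AND training set: columns are x1, x2, bias input b, target y
patterns = np.array([
    [ 1,  1, 1,  1],
    [ 1, -1, 1, -1],
    [-1,  1, 1, -1],
    [-1, -1, 1, -1],
])

w1 = w2 = b = 0  # weights and bias start at zero

for x1, x2, _, y in patterns:
    # Hebb rule: w(new) = w(old) + x*y, b(new) = b(old) + y
    w1 += x1 * y
    w2 += x2 * y
    b += y
    print(f"after ({x1:+d}, {x2:+d}): w1={w1}, w2={w2}, b={b}")

# Final values w1=2, w2=2, b=-2 realize the bipolar AND function
```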
Perceptron Architecture
• Therefore, if the inner product of the ith row of the weight matrix with the input vector is greater than or equal to −bi, the output will be 1; otherwise the output will be 0.
• Each neuron in the network divides the input space into two regions.
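A minimal sketch of this decision rule in NumPy (my own naming; not code from the lecture):

```python
import numpy as np

def hardlim(n):
    """hardlim transfer function: 1 where n >= 0, else 0."""
    return (n >= 0).astype(int)

# Hypothetical weight matrix (2 neurons, 2 inputs), bias, and input vector
W = np.array([[1.0, -1.0],
              [0.5,  1.0]])
b = np.array([0.5, -1.0])
p = np.array([1.0, 2.0])

a = hardlim(W @ p + b)            # network output
test = (W @ p >= -b).astype(int)  # inner product of row i with p vs. -b_i
print(a, test)                    # both print [0 1]
```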
Single-Neuron Perceptron
Multiple-Neuron Perceptron
Unified Learning Rule
• Define the perceptron error e = t − a. The separate update cases then collapse into a single rule:
  w(new) = w(old) + e·p and b(new) = b(old) + e
Training Multiple-Neuron Perceptrons
• For a perceptron with multiple neurons, the rule is applied to every row of the weight matrix; in matrix form,
  W(new) = W(old) + e·pᵀ and b(new) = b(old) + e
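A sketch of one application of this rule, under the assumption of hardlim outputs (names are mine):

```python
import numpy as np

def hardlim(n):
    return (n >= 0).astype(int)

def perceptron_step(W, b, p, t):
    """One unified-rule update: e = t - a, then a rank-one weight update."""
    a = hardlim(W @ p + b)
    e = t - a
    # W(new) = W(old) + e p^T, b(new) = b(old) + e
    return W + np.outer(e, p), b + e

# Toy usage with made-up numbers
W, b = np.zeros((1, 2)), np.zeros(1)
W, b = perceptron_step(W, b, p=np.array([1.0, 2.0]), t=np.array([0]))
print(W, b)   # [[-1. -2.]] [-1.]
```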
• We use 0 as the target output for the orange pattern, p1, instead of −1, because we are using the hardlim transfer function instead of hardlims.
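The difference between the two transfer functions is only the output range; a minimal sketch:

```python
import numpy as np

def hardlim(n):
    """Outputs in {0, 1}; hence t = 0 for the orange pattern."""
    return np.where(n >= 0, 1, 0)

def hardlims(n):
    """Symmetric version with outputs in {-1, +1}."""
    return np.where(n >= 0, 1, -1)

n = np.array([-0.3, 0.7])
print(hardlim(n))    # [0 1]
print(hardlims(n))   # [-1  1]
```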
• Suppose we start with an initial weight matrix W and bias b.
• The first step is to apply the first input vector, p1, to the network: a = hardlim(Wp1 + b).
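Since the slide's numerical values did not survive extraction, here is a sketch of this first step with assumed values (a three-element orange prototype p1 with target 0, and a made-up starting W and b):

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

# Assumed values for illustration only; the slide's actual numbers are not shown
p1, t1 = np.array([1, -1, -1]), np.array([0])   # orange pattern, target 0
W = np.array([[0.5, -1.0, -0.5]])               # assumed initial weight matrix
b = np.array([0.5])                             # assumed initial bias

a = hardlim(W @ p1 + b)   # apply p1 to the network
print("a =", a)           # output of the hardlim neuron
print("e =", t1 - a)      # error that drives the update
```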
• Error: e = t − a.
• Weight and bias update: W(new) = W(old) + e·pᵀ and b(new) = b(old) + e.
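Putting the error and update together, a sketch of the whole training loop (same assumed values as above, repeated so the block runs on its own; an assumed apple pattern p2 with target 1 is added):

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

# Assumed training pairs and initial parameters, for illustration only
pairs = [(np.array([1, -1, -1]), np.array([0])),   # orange, target 0
         (np.array([1,  1, -1]), np.array([1]))]   # apple, target 1
W = np.array([[0.5, -1.0, -0.5]])
b = np.array([0.5])

for epoch in range(10):
    total_error = 0
    for p, t in pairs:
        e = t - hardlim(W @ p + b)   # error
        W = W + np.outer(e, p)       # W(new) = W(old) + e p^T
        b = b + e                    # b(new) = b(old) + e
        total_error += abs(e).sum()
    if total_error == 0:             # every pattern classified correctly
        print(f"converged after epoch {epoch + 1}: W = {W}, b = {b}")
        break
```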
• The third iteration begins again with the first input vector, p1. Continue ...
Convergence
• If a set of weights that solves the problem exists (i.e., the patterns are linearly separable), the perceptron learning rule is guaranteed to converge to a solution in a finite number of iterations.
Limitations
• The perceptron decision boundary is linear (a hyperplane), so a single-layer perceptron can only classify patterns that are linearly separable.
• Linearly inseparable problems, such as the XOR function, cannot be solved by a single-layer perceptron.
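One quick way to see the limitation is to run the same learning loop on XOR, which is not linearly separable; the error never reaches zero (a sketch with my own setup):

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1, 0)

# XOR truth table: no single line separates the 1s from the 0s
pairs = [(np.array([0, 0]), 0), (np.array([0, 1]), 1),
         (np.array([1, 0]), 1), (np.array([1, 1]), 0)]
W, b = np.zeros((1, 2)), np.zeros(1)

for epoch in range(100):
    total_error = 0
    for p, t in pairs:
        e = t - hardlim(W @ p + b)
        W, b = W + np.outer(e, p), b + e
        total_error += abs(e).sum()
    if total_error == 0:
        print("converged")   # never reached for XOR
        break
else:
    print("no convergence on XOR after 100 epochs")
```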