Lect 7 Single Layer NN
McCulloch-Pitts Model
Binary threshold / activation function (hard limiter):
If Vk >= 0 then Yk = 1
Otherwise Yk = 0
[Figure: hard-limiter step function, Yk = 0 for Vk < 0 and Yk = 1 for Vk >= 0]
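A minimal sketch of this neuron in Python (the name mcp_neuron and the threshold argument are illustrative, not from the lecture):

def mcp_neuron(inputs, weights, threshold=0):
    # Weighted sum: Vk = I1*w1 + I2*w2 + ...
    vk = sum(i * w for i, w in zip(inputs, weights))
    # Hard limiter: Yk = 1 if Vk >= threshold, otherwise 0
    return 1 if vk >= threshold else 0

With threshold = 0 this is exactly the hard limiter above; the AND and OR examples below reuse the same neuron with a threshold of 2.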
Example
[Figure: single-layer network with a set of input nodes fully connected to a set of output nodes]
Complete connection: the network forms a bipartite graph between the input and output nodes.
Specification of the previous example
Linearly Separable?
Linearly separable functions are those for which the +ve and -ve cases can be separated by a straight line (a plane in higher dimensions).
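For a two-input McCulloch-Pitts neuron, this separating boundary is the straight line I1*w1 + I2*w2 = threshold in the I1-I2 plane: points on or above the line give Yk = 1, points below give Yk = 0.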
I1  I2  O
0   0   0
0   1   0
1   0   0
1   1   1
[Figure: the four input points plotted in the I1-I2 plane]
This function is linearly separable because a single line (plane) can be drawn separating the +ve case from the -ve cases.
Implementation of AND gate using the McCulloch-Pitts Model
The output of neuron k can be written as:
If Vk >= 2 then Yk = 1
Otherwise Yk = 0
[Figure: neuron k with inputs I1 and I2, weights w1 = 1 and w2 = 1]
Input:
a) I1 = 1, I2 = 1:
   Vk = I1*w1 + I2*w2 = 2, so Yk = 1
b) I1 = 1, I2 = 0:
   Vk = I1*w1 + I2*w2 = 1, so Yk = 0
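A sketch of this AND-gate neuron in Python, with weights w1 = w2 = 1 and threshold 2 as above (the helper function is illustrative):

def mcp_neuron(inputs, weights, threshold):
    # Same hard-limiter neuron as in the earlier sketch
    vk = sum(i * w for i, w in zip(inputs, weights))
    return 1 if vk >= threshold else 0

for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, mcp_neuron((i1, i2), (1, 1), threshold=2))
# Only I1 = 1, I2 = 1 gives Vk = 2 >= 2, so only that row prints Yk = 1.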
Example 2: let our function be logical OR
I1  I2  O
0   0   0
0   1   1
1   0   1
1   1   1
[Figure: the four input points plotted in the I1-I2 plane]
This function is linearly separable because a single line (plane) can be drawn separating the +ve cases from the -ve case.
Implementation of OR gate using the McCulloch-Pitts Model
The output of neuron k can be written as:
If Vk >= 2 then Yk = 1
Otherwise Yk = 0
[Figure: neuron k with inputs I1 and I2, weights w1 = 2 and w2 = 2]
Input:
a) I1 = 1, I2 = 0:
   Vk = I1*w1 + I2*w2 = 2, so Yk = 1
b) I1 = 1, I2 = 1:
   Vk = I1*w1 + I2*w2 = 4, so Yk = 1
c) I1 = 0, I2 = 0:
   Vk = I1*w1 + I2*w2 = 0, so Yk = 0
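The same sketch applied to the OR gate, with weights w1 = w2 = 2 and threshold 2 as in cases (a) to (c) above:

def mcp_neuron(inputs, weights, threshold):
    # Hard-limiter neuron, as before
    vk = sum(i * w for i, w in zip(inputs, weights))
    return 1 if vk >= threshold else 0

for i1, i2 in [(1, 0), (1, 1), (0, 0), (0, 1)]:
    print(i1, i2, mcp_neuron((i1, i2), (2, 2), threshold=2))
# Vk is 2, 4, 0 and 2 respectively, so every row except (0, 0) gives Yk = 1.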
Example 3: let our function be logical XOR
I1  I2  O
0   0   0
0   1   1
1   0   1
1   1   0
[Figure: the four input points plotted in the I1-I2 plane]
This function is not linearly separable because no single line (plane) can separate the +ve and -ve cases.
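One way to see this, writing T for the neuron's threshold (T is introduced here only for this argument). XOR would need:
0 < T           (so that I1 = 0, I2 = 0 gives Yk = 0)
w2 >= T         (so that I1 = 0, I2 = 1 gives Yk = 1)
w1 >= T         (so that I1 = 1, I2 = 0 gives Yk = 1)
w1 + w2 < T     (so that I1 = 1, I2 = 1 gives Yk = 0)
Adding the two middle conditions gives w1 + w2 >= 2T > T (since T > 0), which contradicts the last condition, so no choice of weights and threshold works.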
Comparison of linearly separable and non-linearly separable data
A single-layer neural network is not able to compute XOR.
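A quick brute-force check of this claim (an illustrative sketch, not from the lecture): no integer weights and threshold in a small range let a single hard-limiter neuron reproduce the XOR table.

xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def realizes_xor(w1, w2, threshold):
    # Check whether this weight/threshold choice matches every XOR row
    return all((1 if i1 * w1 + i2 * w2 >= threshold else 0) == out
               for (i1, i2), out in xor_table.items())

found = any(realizes_xor(w1, w2, t)
            for w1 in range(-5, 6)
            for w2 in range(-5, 6)
            for t in range(-5, 6))
print(found)  # False: no single neuron of this kind computes XOR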