Lecture 3: Learning in Feedforward Neural Networks
Learning - A trick to learn
A perceptron fires when the weighted sum of its inputs exceeds the threshold θ:

Σ_{i=1}^{n} w_i x_i > θ

Moving θ to the left-hand side gives

Σ_{i=1}^{n} w_i x_i − θ > 0

i.e.

w1 x1 + w2 x2 + … + wn xn − θ > 0
w1 x1 + w2 x2 + … + wn xn + θ · (−1) > 0

The trick: treat the threshold θ as one more weight, attached to an extra input that is fixed at −1, so it can be learned exactly like the other weights.
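As a quick numeric check, here is a minimal Python sketch (all values are made up for illustration) showing that thresholding the weighted sum at θ is equivalent to appending a constant input of −1 with weight θ and thresholding at 0:

    # All values are hypothetical, chosen only to illustrate the trick.
    w = [0.5, -0.2, 0.8]   # weights w1..wn
    x = [1.0, 0.0, 1.0]    # inputs  x1..xn
    theta = 0.7            # threshold

    # Original form: sum_i w_i x_i > theta
    fires_threshold = sum(wi * xi for wi, xi in zip(w, x)) > theta

    # Trick: fold theta in as one more weight on a constant input of -1
    fires_bias = sum(wi * xi for wi, xi in zip(w + [theta], x + [-1.0])) > 0

    assert fires_threshold == fires_bias   # the two forms always agree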
Linearly Separable Functions
• Definition: Sets of points in 2-D space are linearly separable
if the sets can be separated by a straight line.
• Generalizing, a set of points in n-dimensional space is
linearly separable if there is a hyperplane of (n-1) dimensions
that separates the sets.
• Example:
X1 X2 Class
0 1 -1
2 0 -1
1 1 +1
Linearly Separable Functions: Example
• The first step is to plot the data on a 2-D graph and draw a
line that separates the positive from the negative data points:
• This line has slope -1/2 and x2-intercept 5/4, so its equation is:
x2 = 5/4 - x1/2,
i.e. 2x1 + 4x2 - 5 = 0.
• Taking into account which side is positive, this corresponds to
the weights:
w0 = -5, w1 = 2, w2 = 4
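A short Python check (a sketch using the data and weights from this example) confirms that w0 = -5, w1 = 2, w2 = 4 puts all three training points on the correct side of the line 2x1 + 4x2 - 5 = 0:

    # Training points from the example: (x1, x2) -> class
    points = [((0, 1), -1), ((2, 0), -1), ((1, 1), +1)]
    w0, w1, w2 = -5, 2, 4

    for (x1, x2), label in points:
        v = w0 + w1 * x1 + w2 * x2       # which side of 2*x1 + 4*x2 - 5 = 0
        predicted = +1 if v > 0 else -1
        assert predicted == label         # every point lies on its class's side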
Single-layer perceptron
• Inputs are typically in the range [0, 1], where 0 is "off" and 1 is "on".
• Weights can be any real number (positive or negative).
What is the role of the bias in NN?
[Figure: three parallel decision boundaries x1 - x2 = -1, x1 - x2 = 0, and x1 - x2 = 1 in the (x1, x2) plane; changing the bias shifts the line without changing its slope.]
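The lines in the figure differ only in their bias term. A small sketch (the test point is an arbitrary choice) shows how changing the bias alone moves a fixed input across the decision boundary x1 - x2 = b:

    x1, x2 = 0.5, 0.0                 # an arbitrary test input
    for b in (-1, 0, 1):              # the three biases from the figure
        side = "positive" if x1 - x2 > b else "non-positive"
        print(f"x1 - x2 = {x1 - x2:+.1f} vs b = {b:+d}: {side} side")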
Bias as extra input
[Figure: a neuron with inputs x1, ..., xm weighted by w1, ..., wm, plus an extra input x0 = +1 with weight w0; a summing function computes v, and an activation function φ(·) produces the output y.]

v = Σ_{j=0}^{m} w_j x_j, where x0 = +1 and w0 = b (the bias).
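In code, this convention amounts to prepending x0 = +1 to the input vector and w0 = b to the weight vector; a minimal sketch with made-up numbers:

    # Made-up bias, weights, and inputs for illustration.
    b = 0.5                       # bias
    w = [0.4, -0.3]               # w1, w2
    x = [1.0, 2.0]                # x1, x2

    # v = sum_{j=0}^{m} w_j x_j with x0 = +1 and w0 = b
    v = sum(wj * xj for wj, xj in zip([b] + w, [1.0] + x))

    # Identical to treating the bias separately:
    assert abs(v - (b + sum(wj * xj for wj, xj in zip(w, x)))) < 1e-12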
Bias of a Neuron: Example
• In the perceptron below, what will the output be when the
input is (0, 0)? What about inputs (0, 1), (1, 1), and (1, 0)?
What if we change the bias weight to -0.5?
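The perceptron figure from the original slide is not reproduced here. As an illustration only, the sketch below assumes weights w1 = w2 = 1, a bias input fixed at +1, and a step activation (output 1 if v >= 0, else 0), and compares a bias weight of 1.0 against -0.5:

    # Assumed architecture; the slide's actual weights are not available.
    def perceptron(x1, x2, bias_w):
        v = bias_w * 1 + 1 * x1 + 1 * x2   # assumed w1 = w2 = 1, bias input +1
        return 1 if v >= 0 else 0          # step activation

    for bias_w in (1.0, -0.5):
        outputs = {(x1, x2): perceptron(x1, x2, bias_w)
                   for x1 in (0, 1) for x2 in (0, 1)}
        print(f"bias weight {bias_w:+.1f}: {outputs}")
    # With bias weight +1.0 every input fires; with -0.5 the unit computes OR,
    # showing how the bias alone changes the function being computed.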
Perceptron's Learning Rule
• Choosing the weights and threshold θ for the perceptron by hand is
not easy! How can we learn the weights and threshold from examples?
• We can use a learning algorithm that adjusts the weights and
threshold θ based on examples.
• Adjust the weights in such a way that the output of the ANN is
consistent with the class labels of the training examples
– Error function: e = [y - f(u(x))]^2
– Example: a SPAM filter.
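A minimal sketch of the resulting procedure, using the standard perceptron update w ← w + η · e · x with the (unsquared) error e = y - f(u(x)) as the update signal, applied to the three points from the linearly separable example above (the learning rate η, zero initialization, and epoch limit are arbitrary choices):

    # Perceptron learning rule on the example data from earlier slides.
    data = [((0, 1), -1), ((2, 0), -1), ((1, 1), +1)]
    w = [0.0, 0.0, 0.0]      # w0 (bias weight, input fixed at +1), w1, w2
    eta = 0.5                # learning rate (arbitrary choice)

    def predict(w, x1, x2):
        u = w[0] * 1 + w[1] * x1 + w[2] * x2
        return +1 if u > 0 else -1

    for epoch in range(100):
        errors = 0
        for (x1, x2), y in data:
            e = y - predict(w, x1, x2)       # error e = y - f(u(x))
            if e != 0:
                w[0] += eta * e * 1          # adjust the bias weight
                w[1] += eta * e * x1         # adjust the input weights
                w[2] += eta * e * x2
                errors += 1
        if errors == 0:                      # all examples classified correctly
            break

    print(w)   # converges here to [-2.0, 1.0, 2.0], which separates the data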