Artificial Intelligence: Unit 2
Perceptron Networks
Perceptron networks come under single-layer feed-forward networks and are also called simple perceptrons.
The key points to be noted in a perceptron network are:
1. The perceptron network consists of three units, namely, the sensory unit (input unit), the associator unit (hidden unit) and the response unit (output unit).
2. The sensory units are connected to the associator units with fixed weights having values 1, 0 or -1, which are assigned at random.
3. The binary activation function is used in the sensory unit and the associator unit.
4. The response unit has an activation of 1, 0 or -1. The binary step with a fixed threshold θ is used as the activation for the associator. The output signals that are sent from the associator unit to the response unit are only binary.
5. The output of the perceptron network is given by y = f(y_in), where
f(y_in) = 1 if y_in > θ, 0 if -θ ≤ y_in ≤ θ, and -1 if y_in < -θ.
6. The perceptron learning rule is used in the weight updation between the associator unit and the response
unit. For each training input, the net will calculate the response and it will determine whether or not an
error has occurred.
7. The error calculation is based on the comparison of the values of targets with those of the calculated outputs.
8. The weights on the connections from the units that send the nonzero signal will get adjusted suitably.
9. The weights will be adjusted on the basis of the learning rule if an error has occurred for a particular training pattern, i.e.,
w_i(new) = w_i(old) + α t x_i
b(new) = b(old) + α t
If no error occurs, there is no weight updation and hence the training process may be stopped. In the above equations, the target value "t" is +1 or -1 and α is the learning rate. In general, these learning rules begin with an initial guess at the weight values and then successive adjustments are made on the basis of the evaluation of an objective function. Eventually, the learning rules reach a near-optimal or optimal solution in a finite number of steps.
Figure: Flowchart for perceptron network training. Initialize the weights and bias and set the learning rate α. For each training pair s:t, activate the input units (x_i = s_i) and compute the output y. If y ≠ t, update
w_i(new) = w_i(old) + α t x_i and b(new) = b(old) + α t;
otherwise keep w_i(new) = w_i(old) and b(new) = b(old). If no weight changes occurred over a whole epoch, STOP; otherwise continue training.
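Since the flowchart above fully specifies the algorithm, it can be sketched directly in code. The following is a minimal Python sketch (the threshold θ = 0.2, zero initial weights and α = 1 are assumed values, not prescribed here) that trains a perceptron on the bipolar AND function from the first solved problem below; changing the target vector is all that is needed for OR or ANDNOT.

```python
import numpy as np

def step(y_in, theta=0.2):
    """Bipolar step activation with fixed threshold theta."""
    if y_in > theta:
        return 1
    elif y_in < -theta:
        return -1
    return 0

def train_perceptron(X, t, alpha=1.0, theta=0.2, max_epochs=100):
    """Perceptron learning rule: update only when output != target."""
    w = np.zeros(X.shape[1])  # weights start at zero (assumed)
    b = 0.0                   # bias starts at zero (assumed)
    for epoch in range(max_epochs):
        changed = False
        for x, target in zip(X, t):
            y = step(np.dot(w, x) + b, theta)
            if y != target:                 # an error has occurred
                w = w + alpha * target * x  # w(new) = w(old) + alpha*t*x
                b = b + alpha * target      # b(new) = b(old) + alpha*t
                changed = True
        if not changed:                     # no weight change -> stop
            break
    return w, b

# Bipolar AND: output is +1 only when both inputs are +1
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([1, -1, -1, -1])
w, b = train_perceptron(X, t)
print("weights:", w, "bias:", b)  # converges to w = [1, 1], b = -1
```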
Solved Problems
Problem 1: Implement AND function using perceptron networks for bipolar inputs and targets.
Problem 2: Implement OR function using perceptron networks for bipolar inputs and targets.
Problem 3: Implement ANDNOT function using perceptron networks for bipolar inputs and targets.
Problem 4: Classify the two-dimensional input pattern shown in the figure using a perceptron network. The symbol "*" indicates the data representation to be +1 and "•" indicates data to be -1. The patterns are I and F. For pattern I the target is +1, and for F the target is -1.
Adaptive Linear Neuron (Adaline)
The units with linear activation function are called linear units.
A network with a single linear unit is called an Adaline (adaptive linear
neuron).
That is, in an Adaline, the input-output relationship is linear.
Adaline uses bipolar activation for its input signals and its target output.
The weights between the input and the output are adjustable. The bias in Adaline acts like an adjustable weight whose connection comes from a unit whose activation is always 1.
Adaline is a net which has only one output unit.
The Adaline network may be trained using the delta rule, which is also called the least mean square (LMS) rule or the Widrow-Hoff rule.
This learning rule is found to minimize the mean-squared error between
the activation and the target value.
Delta Rule for Single Output Unit
The delta rule for adjusting the weight of the i-th input (i = 1 to n) is
Δw_i = α(t - y_in)x_i
where α is the learning rate, x_i is the activation of input unit i, t is the target and y_in is the net input to the output unit.
Adaline Model
Figure: Flowchart for Adaline training. For each training pair s:t, compute the net input y_in and update the weights and bias:
w_i(new) = w_i(old) + α(t - y_in)x_i
b(new) = b(old) + α(t - y_in)
Then calculate the error E_i = Σ(t - y_in)^2 for the epoch. If E_i equals (or falls below) the specified error E_s, STOP; otherwise continue with the next epoch.
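The Adaline flowchart translates just as directly. Below is a minimal Python sketch of delta-rule (LMS) training, assuming zero initial weights, α = 0.05 and a stopping tolerance slightly above the least attainable squared error (all assumed values); it is demonstrated on the bipolar OR function from the first solved problem below.

```python
import numpy as np

def train_adaline(X, t, alpha=0.05, tol=1.1, max_epochs=100):
    """Delta rule (LMS): w(new) = w(old) + alpha*(t - y_in)*x."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for epoch in range(max_epochs):
        sq_error = 0.0
        for x, target in zip(X, t):
            y_in = np.dot(w, x) + b      # net input (linear activation)
            err = target - y_in
            w = w + alpha * err * x      # delta-rule weight update
            b = b + alpha * err          # delta-rule bias update
            sq_error += err ** 2         # accumulate squared error
        # the least total squared error attainable on bipolar OR is 1.0,
        # so we stop once the epoch error falls just above that value
        if sq_error <= tol:
            break
    return w, b

# Bipolar OR: output is -1 only when both inputs are -1
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
t = np.array([1, 1, 1, -1])
w, b = train_adaline(X, t)
print("weights:", w, "bias:", b)  # approaches w = [0.5, 0.5], b = 0.5
```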
Solved Problems
Problem 1: Implement OR function with bipolar inputs and targets using Adaline network.
Problem 2: Use Adaline network to train ANDNOT function with bipolar inputs and targets. Perform 2 epochs of training.
APPLICATION
COMPARISON WITH PERCEPTRON
Multiple Adaptive Linear Neurons (Madaline)
Figure: Flowchart for Madaline training (MRI algorithm). For each training pair, compute the hidden net inputs z_inj and the output y. If y ≠ t:
If t = +1, update the weights on the hidden unit z_j whose net input is closest to zero:
b_j(new) = b_j(old) + α(1 - z_inj)
w_ij(new) = w_ij(old) + α(1 - z_inj)x_i
If t = -1, update the weights on all hidden units z_k that have positive net input:
b_k(new) = b_k(old) + α(-1 - z_ink)
w_ik(new) = w_ik(old) + α(-1 - z_ink)x_i
If no weight changes occurred or the specified number of epochs has been completed, STOP; otherwise continue training.
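A single perceptron or Adaline cannot represent XOR, which is why the solved problem below uses a Madaline. Here is a minimal Python sketch of MRI training with two hidden Adalines and a fixed OR-like output unit; the initial weights and α = 0.5 are assumed values chosen so training settles in a few epochs, since the problem asks us to assume the required parameters.

```python
import numpy as np

def bipolar_step(x):
    """Bipolar step used by each Adaline unit in the Madaline."""
    return 1 if x >= 0 else -1

def train_madaline_xor(alpha=0.5, max_epochs=50):
    """MRI training of a 2-input, 2-hidden-unit Madaline on bipolar XOR.
    Hidden-to-output weights are fixed (v = 0.5, b_out = 0.5), so the
    output unit acts as a logical OR of the hidden units."""
    # Small nonzero initial hidden weights (assumed values)
    W = np.array([[0.05, 0.10],   # W[i, j]: input i -> hidden unit j
                  [0.20, 0.20]])
    b = np.array([0.30, 0.15])    # hidden biases
    v, b_out = np.array([0.5, 0.5]), 0.5  # fixed output layer

    X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])
    t = np.array([-1, 1, 1, -1])  # bipolar XOR targets

    for epoch in range(max_epochs):
        changed = False
        for x, target in zip(X, t):
            z_in = x @ W + b                        # hidden net inputs
            z = np.array([bipolar_step(s) for s in z_in])
            y = bipolar_step(z @ v + b_out)         # network output
            if y == target:
                continue
            changed = True
            if target == 1:
                # update the unit whose net input is closest to zero
                j = np.argmin(np.abs(z_in))
                b[j] += alpha * (1 - z_in[j])
                W[:, j] += alpha * (1 - z_in[j]) * x
            else:
                # update every unit with positive net input
                for k in np.where(z_in > 0)[0]:
                    b[k] += alpha * (-1 - z_in[k])
                    W[:, k] += alpha * (-1 - z_in[k]) * x
        if not changed:
            break
    return W, b

W, b = train_madaline_xor()
print("hidden weights:\n", W, "\nhidden biases:", b)
```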
Solved Problems
Problem: Using Madaline network, implement XOR function with bipolar inputs and targets. Assume the required parameters for training of the network.
Back-Propagation Network
The back-propagation learning algorithm is one of the most important developments in neural
networks (Bryson and Ho, 1969; Werbos, 1974; LeCun, 1985; Parker, 1985; Rumelhart, 1986).
This network has reawakened the scientific and engineering community to the modeling and
processing of numerous quantitative phenomena using neural networks.
This learning algorithm is applied to multilayer feed-forward networks consisting of processing
elements with continuous differentiable activation functions.
The networks associated with back-propagation learning algorithm are also called back-
propagation networks (BPNs).
For a given set of training input-output pairs, this algorithm provides a procedure for changing the weights in a BPN to classify the given input patterns correctly.
The basic concept for this weight update algorithm is simply the gradient-descent method as used in the case of simple perceptron networks with differentiable units.
This is a method where the error is propagated back to the hidden units.
The aim of the neural network is to train the net to achieve a balance between the net’s ability to
respond (memorization) and its ability to give reasonable responses to the input that is similar but
not identical to the one that is used in training (generalization).
The back-propagation algorithm differs from other networks with respect to the process by which the weights are calculated during the learning period of the network.
The training of the BPN is done in three stages: the feed-forward of the input training pattern, the calculation and back-propagation of the error, and the updation of weights.
The testing of the BPN involves the computation of the feed-forward phase only.
There can be more than one hidden layer (which can be more beneficial), but one hidden layer is sufficient.
Even though the training is very slow, once the network is trained it can produce its outputs very rapidly.
MERITS OF BACK-PROPAGATION NETWORK
Architecture
Figure: Architecture of a back-propagation network (input layer, one hidden layer, output layer).
Solved Problems
Problem 1: Using back-propagation network, find the new weights for the net shown in the figure. It is presented with the input pattern [0, 1] and the target output is 1. Use a learning rate α = 0.25 and the binary sigmoidal activation function.
Figure: Network.
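One training step of the algorithm can be traced in a few lines. The sketch below performs the feed-forward phase, the back-propagation of the error and the weight updation for a 2-2-1 network with the binary sigmoid, input [0, 1], target 1 and α = 0.25; because the figure is not reproduced here, the initial weights are assumed values used only for illustration.

```python
import numpy as np

def sigmoid(x):
    """Binary sigmoid f(x) = 1 / (1 + e^-x); f'(x) = f(x)(1 - f(x))."""
    return 1.0 / (1.0 + np.exp(-x))

# Assumed initial weights (illustrative only; the figure gives the real ones)
V = np.array([[0.6, -0.3],    # V[i, j]: input i -> hidden unit j
              [-0.1, 0.4]])
b_v = np.array([0.3, 0.5])    # hidden biases
w = np.array([0.4, 0.1])      # hidden -> output weights
b_w = -0.2                    # output bias

x = np.array([0.0, 1.0])      # input pattern
t, alpha = 1.0, 0.25          # target and learning rate

# --- feed-forward phase ---
z_in = x @ V + b_v            # net input to hidden units
z = sigmoid(z_in)             # hidden activations
y_in = z @ w + b_w            # net input to output unit
y = sigmoid(y_in)             # network output

# --- back-propagation of error ---
delta_out = (t - y) * y * (1 - y)          # output error term
delta_hid = delta_out * w * z * (1 - z)    # hidden error terms (use old w)

# --- weight updation ---
w += alpha * delta_out * z
b_w += alpha * delta_out
V += alpha * np.outer(x, delta_hid)        # dV[i, j] = alpha * x_i * delta_j
b_v += alpha * delta_hid

print("output y =", round(y, 4))
print("new w =", w, "new b_w =", round(b_w, 4))
```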
Problem 2: Find the new weights, using back-propagation network, for the network shown in the figure. The network is presented with the input pattern [-1, 1] and the target output is +1. Use a learning rate of α = 0.25 and the bipolar sigmoidal activation function.
Figure: Network.
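Problem 2 differs from Problem 1 only in the activation function. With the bipolar sigmoid f(x) = 2/(1 + e^(-x)) - 1, the derivative becomes f'(x) = (1/2)(1 + f(x))(1 - f(x)), so in the sketch above only the activation function and the two error terms change, for example:

```python
import numpy as np

def bipolar_sigmoid(x):
    """Bipolar sigmoid: f(x) = 2 / (1 + e^-x) - 1, ranging over (-1, 1)."""
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

def bipolar_sigmoid_prime(f_x):
    """Derivative expressed through the function value f(x)."""
    return 0.5 * (1.0 + f_x) * (1.0 - f_x)
```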
Reference
S.N. Sivanandam and S.N. Deepa, "Principles of Soft Computing," 2nd Edition, Wiley India Pvt. Ltd., ISBN: 978-81-265-2741-0, www.wileyindia.com.
Link for the recorded lectures of Artificial Intelligence (EE52108):
https://drive.google.com/drive/folders/1UC1vj2LeN1LtsT_7TMKgpeKCo_PnkLsF?usp=sharing