Unit6 - 4 ANN

The document discusses various classification techniques including decision trees, Bayesian classification, artificial neural networks (ANN), and support vector machines. It outlines the training process for ANN using backpropagation, including weight adjustment and the importance of network topology. Additionally, it addresses evaluation metrics for classification accuracy and issues like overfitting and underfitting.


6.1 Definition (Classification, Prediction), Learning and testing of classification
6.2 Classification by decision tree induction, ID3 as attribute selection algorithm
6.3 Bayesian classification, Laplace smoothing
6.4 ANN: Classification by backpropagation
6.5 Rule based classifier (Decision tree to rules, rule coverage and accuracy, effect of rule simplification)
6.6 Support vector machine
6.7 Evaluating accuracy (precision, recall, f-measure)
• Issues in classification, Overfitting and underfitting
• K-fold cross validation, Comparing two classifiers (McNemar's test)
6.4 Artificial Neural Network Classifier

◼ It is a set of connected input/output units in which each connection has a weight associated with it (a one-unit sketch follows this list).
◼ During the learning phase, the network learns by adjusting the weights so as to predict the correct class label of the input tuples.
◼ It is also referred to as connectionist learning due to the connections between units.
◼ It has a long training time and poor interpretability, but tolerates noisy data.
◼ It can classify patterns on which it has not been trained.
◼ It is well suited for continuous-valued inputs.
◼ It has inherently parallel topology and processing.
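As a concrete illustration of "a set of connected units where each connection has a weight", here is a minimal sketch of what a single such unit computes; the function name and all numbers are illustrative, and the sigmoid used is the activation function the later slides apply:

import math

def unit_output(inputs, weights, bias):
    # One unit: weighted sum of its input connections plus a bias,
    # squashed by a sigmoid activation into the range (0, 1).
    net = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-net))

# Example: a unit with three weighted input connections.
print(unit_output([1, 0, 1], [0.2, -0.4, 0.3], bias=0.1))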
◼ Before training, the network topology must be decided by (a configuration sketch follows this list):
I. Number of input nodes/units: depends upon the number of independent variables in the data set.
II. Number of hidden layers: generally only one hidden layer is used for most problems; two layers can be designed for complex problems. The number of nodes in the hidden layer can be adjusted iteratively.
III. Number of output nodes/units: depends upon the number of class labels in the data set.
IV. Learning rate: can be adjusted iteratively.
V. Learning algorithm: any appropriate learning algorithm can be selected during the training phase.
VI. Bias value: can be adjusted iteratively.
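A minimal sketch of how these six decisions might be recorded before training, assuming a single-hidden-layer network; the NetworkConfig class and all field names are illustrative, not from the slides:

class NetworkConfig:
    # Illustrative container for the topology decisions listed above.
    def __init__(self, n_inputs, n_hidden_nodes, n_outputs,
                 learning_rate=0.1, bias=0.0):
        self.n_inputs = n_inputs              # I. one unit per independent variable
        self.n_hidden_layers = 1              # II. one layer suffices for most problems
        self.n_hidden_nodes = n_hidden_nodes  # II. adjusted iteratively
        self.n_outputs = n_outputs            # III. one unit per class label
        self.learning_rate = learning_rate    # IV. adjusted iteratively
        self.algorithm = "backpropagation"    # V. chosen for the training phase
        self.bias = bias                      # VI. initial bias, adjusted iteratively

# Example: a data set with 3 independent variables and 2 class labels.
cfg = NetworkConfig(n_inputs=3, n_hidden_nodes=2, n_outputs=2)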
◼ During training, the connection weights must be adjusted so as to fit the input values to the desired output values.
Back propagation algorithm

◼ Step 1: Initialization: Set all the weights and threshold levels of the network to random numbers uniformly distributed inside a small range.
◼ Step 2: Activation: Activate the back propagation neural network by applying the inputs and desired outputs.
   i. Calculate the actual outputs of the neurons in the hidden layers.
   ii. Calculate the actual outputs of the neurons in the output layer.
◼ Step 3: Weight training:
   i. Update the weights in the back propagation network by propagating backwards the errors associated with the output neurons.
   ii. Calculate the error gradient of the output layer and hence of the neurons in the hidden layer.
◼ Step 4: Iteration: Increase the iteration count by repeating steps 2 and 3 until the selected error criterion is satisfied. (A runnable sketch of these steps follows.)
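A minimal runnable sketch of these four steps, assuming one hidden layer, sigmoid activations, and the standard error-gradient update rules (shown on the next slide); the function and variable names, the hyperparameter defaults, and the XOR demo data are all illustrative, not from the slides:

import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, n_in, n_hid, n_out, lr=0.5, max_epochs=5000, tol=0.01):
    # Step 1: Initialization -- weights and biases set to small random values.
    w_ih = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in)]
    w_ho = [[random.uniform(-0.5, 0.5) for _ in range(n_out)] for _ in range(n_hid)]
    b_h = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
    b_o = [random.uniform(-0.5, 0.5) for _ in range(n_out)]

    for _ in range(max_epochs):                      # Step 4: Iteration
        sse = 0.0                                    # sum of squared errors this epoch
        for x, target in samples:
            # Step 2: Activation -- forward pass through hidden and output layers.
            o_h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(n_in)) + b_h[j])
                   for j in range(n_hid)]
            o_o = [sigmoid(sum(o_h[j] * w_ho[j][k] for j in range(n_hid)) + b_o[k])
                   for k in range(n_out)]
            # Step 3: Weight training -- propagate output errors backwards.
            err_o = [o_o[k] * (1 - o_o[k]) * (target[k] - o_o[k])
                     for k in range(n_out)]
            err_h = [o_h[j] * (1 - o_h[j]) *
                     sum(err_o[k] * w_ho[j][k] for k in range(n_out))
                     for j in range(n_hid)]
            for j in range(n_hid):
                for k in range(n_out):
                    w_ho[j][k] += lr * err_o[k] * o_h[j]
            for i in range(n_in):
                for j in range(n_hid):
                    w_ih[i][j] += lr * err_h[j] * x[i]
            for k in range(n_out):
                b_o[k] += lr * err_o[k]
            for j in range(n_hid):
                b_h[j] += lr * err_h[j]
            sse += sum((target[k] - o_o[k]) ** 2 for k in range(n_out))
        if sse < tol:                                # selected error criterion
            break
    return w_ih, w_ho, b_h, b_o

# Usage: learn XOR with 2 input units, 2 hidden units, 1 output unit.
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
weights = train(data, n_in=2, n_hid=2, n_out=1)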
For each hidden- or output-layer unit $j$, the net input and output are computed as:

$I_j = \sum_i w_{ij} O_i + \theta_j$

NOTE: $\theta_j$ is the BIAS of unit $j$.

$O_j = \dfrac{1}{1 + e^{-I_j}}$ = Sigmoid activation function
[Figure: the example problem, a small multilayer feed-forward network with its initial weights, biases, input tuple, and learning rate.]
Solution:

Net input & output calculation

NOTE: Calculation of I and O was done with the formulas above:
$I_j = \sum_i w_{ij} O_i + \theta_j$ and $O_j = \dfrac{1}{1 + e^{-I_j}}$

Similarly,
I5 = … (DO YOURSELF) O5 = … (DO YOURSELF)

NOTE: Calculation of E (the error of each unit) was done as:
$Err_j = O_j (1 - O_j)(T_j - O_j)$ for an output unit $j$ with target value $T_j$, and
$Err_j = O_j (1 - O_j) \sum_k Err_k\, w_{jk}$ for a hidden unit $j$.

Calculation Process:

[Figure: tables of the computed net inputs, outputs, error values, and the weights and biases updated as $w_{ij} = w_{ij} + (l)\,Err_j\,O_i$ and $\theta_j = \theta_j + (l)\,Err_j$, where $l$ is the learning rate.]

NOTE: O3 = I3 = x3 since $O_j = I_j$ for the input units: an input unit passes its value forward unchanged. (A numeric sketch with assumed values follows.)
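The example's actual weights, biases, and inputs were given in figures that are not preserved here, so the following is a hedged numeric sketch of the same calculation pattern; the layout (input units 1-3, hidden units 4-5, output unit 6) is inferred from the notes above, and every number, including the target T6, is an illustrative assumption:

import math

# Assumed illustrative weights, biases, and inputs -- NOT the slides' numbers.
x1, x2, x3 = 1, 0, 1                  # inputs; O1 = x1, O2 = x2, O3 = x3
w14, w24, w34 = 0.1, -0.2, 0.4        # weights into hidden unit 4
w15, w25, w35 = 0.3, 0.5, -0.1        # weights into hidden unit 5
w46, w56 = 0.2, -0.4                  # weights into output unit 6
t4, t5, t6 = 0.1, -0.3, 0.2           # biases (theta)

def sigmoid(net):
    return 1 / (1 + math.exp(-net))   # O_j = 1 / (1 + e^{-I_j})

I4 = w14 * x1 + w24 * x2 + w34 * x3 + t4   # I_j = sum_i w_ij * O_i + theta_j
O4 = sigmoid(I4)
I5 = w15 * x1 + w25 * x2 + w35 * x3 + t5
O5 = sigmoid(I5)
I6 = w46 * O4 + w56 * O5 + t6
O6 = sigmoid(I6)

T6 = 1                                # assumed target class label
Err6 = O6 * (1 - O6) * (T6 - O6)      # output-unit error
Err4 = O4 * (1 - O4) * Err6 * w46     # hidden-unit errors
Err5 = O5 * (1 - O5) * Err6 * w56
print(I4, O4, I5, O5, I6, O6, Err6, Err4, Err5)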
