Unit 2: Classification
Types of Regression:
o Linear Regression
o Regression Trees
o Non-Linear Regression
o Bayesian Linear Regression
o Polynomial Regression
Logistic Regression
Logistic regression is used to predict a categorical dependent variable from a given set of
independent variables.
The outcome must be a categorical or discrete value: Yes or No, 0 or 1, True or False, etc.
However, instead of giving the exact values 0 and 1, it gives probabilistic values that lie
between 0 and 1.
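A minimal sketch in Python using scikit-learn (the hours-studied toy dataset is purely illustrative), showing that the model outputs probabilities between 0 and 1 rather than only the exact class labels:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: hours studied (X) vs. pass/fail outcome (y)
X = np.array([[1], [2], [3], [4], [5], [6]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression()
model.fit(X, y)

# predict_proba gives P(class 0) and P(class 1), each between 0 and 1
print(model.predict_proba([[3.5]]))
# predict applies a threshold to return the class label (0 or 1)
print(model.predict([[3.5]]))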
Classification Algorithms
o Random Forest
o Decision Trees
o Logistic Regression
o Support Vector Machines
Confusion Matrix
Example: suppose a classifier is evaluated on 10 test instances and produces TP = 3, TN = 4,
FP = 2, FN = 1.
Classification Accuracy:
Accuracy = (TP + TN) / (TP + TN + FP + FN) = (3+4)/(3+4+2+1) = 0.70
Recall: Recall tells us, out of all the cases that are actually yes, how often the model predicts yes.
Recall = TP / (TP + FN) = 3/(3+1) = 0.75
Precision: Precision tells us, out of all the cases the model predicts as yes, how often it is correct.
Precision = TP / (TP + FP) = 3/(3+2) = 0.60
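The same three metrics, computed in Python from the example counts above (TP = 3, TN = 4, FP = 2, FN = 1):

TP, TN, FP, FN = 3, 4, 2, 1

accuracy = (TP + TN) / (TP + TN + FP + FN)   # 7/10 = 0.70
recall = TP / (TP + FN)                      # 3/4  = 0.75
precision = TP / (TP + FP)                   # 3/5  = 0.60
print(accuracy, recall, precision)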
Neural networks
Perceptron
The perceptron is one of the simplest types of artificial neural networks.
It is a supervised learning algorithm for binary classifiers. Hence, we can
consider it a single-layer neural network with four main components:
input values, weights and bias, net sum, and an activation function.
ANN
Each time the full dataset passes through the algorithm, it is said to have completed an epoch.
An epoch, in machine learning, therefore refers to one entire pass of the training data through
the algorithm. The number of epochs is a hyperparameter that controls how many such passes the
training process makes.
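A minimal sketch of the idea (the tiny dataset and the number of epochs are assumptions purely for illustration):

training_data = [(0.0, 0), (0.5, 0), (1.0, 1), (1.5, 1)]
n_epochs = 3  # hyperparameter: number of full passes over the data

for epoch in range(n_epochs):
    for x, t in training_data:
        pass  # one model-update step per instance would go here
    print(f"Epoch {epoch + 1}: saw all {len(training_data)} instances")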
Perceptron – Activation function
The activation function determines whether the neuron fires or not.
In the classic perceptron, the activation function is essentially a step function.
Types of Activation functions:
o Sign function
o Step function, and
o Sigmoid function
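Sketches of the three functions in Python (the value returned at exactly 0 is a convention that varies between textbooks; x >= 0 is assumed here):

import numpy as np

def sign_fn(x):
    return 1 if x >= 0 else -1    # sign: outputs -1 or +1

def step_fn(x):
    return 1 if x >= 0 else 0     # step: outputs 0 or 1

def sigmoid_fn(x):
    return 1 / (1 + np.exp(-x))   # sigmoid: smooth output in (0, 1)

for z in (-2.0, 0.0, 2.0):
    print(z, sign_fn(z), step_fn(z), round(sigmoid_fn(z), 3))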
Perceptron Model (algorithm)
Step-1
In the first step, multiply each input value by its weight and add the bias to obtain the
weighted sum: ∑wi*xi + b.
Step-2
In the second step, an activation function f is applied to the weighted sum from Step-1,
which gives us an output either in binary form or as a continuous value, as follows:
Y = f(∑wi*xi + b)
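Both steps as a minimal Python sketch (the weights, bias, and input values are arbitrary illustrative numbers):

import numpy as np

w = np.array([0.5, -0.6])    # weights
b = 0.1                      # bias
x = np.array([1.0, 0.4])     # input values

net = np.dot(w, x) + b       # Step-1: weighted sum
y = 1 if net >= 0 else 0     # Step-2: step activation gives a binary output
print(net, y)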
Perceptron training rule
Perceptron_training_rule(X, η)
  initialize w (each wi to a small random value)
  repeat
    for each training instance (x, tx) ∈ X
      compute the real output ox = Activation(Summation(w·x))
      if (tx ≠ ox)
        for each wi
          ∆wi ← η (tx − ox) xi
          wi ← wi + ∆wi
        end for
      end if
    end for
  until all the training instances in X are correctly classified
  return w
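A runnable Python version of the rule above; the AND dataset and the learning rate η = 0.1 are illustrative assumptions, with targets in {0, 1} to match a step activation:

import numpy as np

def perceptron_training_rule(X, t, eta=0.1, max_epochs=100):
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.05, 0.05, size=X.shape[1])  # small random init
    b = 0.0
    for _ in range(max_epochs):
        all_correct = True
        for x_i, t_i in zip(X, t):
            o_i = 1 if np.dot(w, x_i) + b >= 0 else 0  # Activation(Summation(w·x))
            if t_i != o_i:
                all_correct = False
                w += eta * (t_i - o_i) * x_i   # ∆wi = η (tx − ox) xi
                b += eta * (t_i - o_i)
        if all_correct:  # stop once every instance is classified correctly
            break
    return w, b

# Example: learn the linearly separable AND function
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])
w, b = perceptron_training_rule(X, t)
print(w, b)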
Single-layer vs. Multilayer Perceptron