MLT OPPE Formula Guide

Uploaded by Adavya

Regression
Normal Equation Linear Regression
w = (XX^T)^{-1} XY

ŷ = w^T X
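As a sketch of this closed form in the single-feature case (with X a 1×n row of inputs, (XX^T)^{-1}XY reduces to a scalar division; function names here are illustrative, not from the course):

```python
# Normal-equation linear regression, one feature, no bias term.
# With X a 1 x n row vector, (X X^T)^{-1} X Y reduces to sum(x*y) / sum(x*x).
def fit_normal_equation(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(w, xs):
    # y_hat = w^T X, elementwise for a single feature
    return [w * x for x in xs]

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]          # y = 2x exactly
w = fit_normal_equation(xs, ys)
print(w)                      # 2.0
print(predict(w, [4.0]))      # [8.0]
```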

Gradient Descent
w^{t+1} = w^t − η (2XX^T w^t − 2XY), where η is the learning rate

ŷ = w^T X
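The same single-feature simplification applies to the gradient-descent update (a minimal sketch, assuming a learning rate small enough to converge; names are illustrative):

```python
# Gradient descent for single-feature linear regression (no bias).
# Update: w <- w - eta * (2 * sum(x^2) * w - 2 * sum(x*y))
def fit_gradient_descent(xs, ys, eta=0.01, steps=200):
    w = 0.0
    sxx = sum(x * x for x in xs)                      # X X^T as a scalar
    sxy = sum(x * y for x, y in zip(xs, ys))          # X Y as a scalar
    for _ in range(steps):
        w -= eta * (2.0 * sxx * w - 2.0 * sxy)
    return w

w = fit_gradient_descent([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 4))            # 2.0
```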

Stochastic Gradient Descent


Same update rule as Gradient Descent, except that at each step X and y are replaced by a randomly sampled subset of the dataset.
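A sketch of the stochastic variant, drawing one random sample per update (the single-sample choice and the seed are illustrative assumptions):

```python
import random

# Stochastic gradient descent: same update as full-batch gradient descent,
# but each step uses one randomly drawn sample (x_i, y_i).
def fit_sgd(xs, ys, eta=0.02, steps=300, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(steps):
        i = rng.randrange(len(xs))
        w -= eta * (2.0 * xs[i] * xs[i] * w - 2.0 * xs[i] * ys[i])
    return w

w = fit_sgd([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(round(w, 3))            # close to 2.0
```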

Kernel Regression
K = (X^T X + 1)^{degree}   (polynomial kernel; the power is applied elementwise, i.e. K_{ij} = (x_i^T x_j + 1)^{degree})

α = K^{-1} y

ŷ = K^T α
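A self-contained sketch for scalar inputs, assuming the elementwise polynomial-kernel reading above (the small Gaussian-elimination solver stands in for a library matrix inverse; all names are illustrative):

```python
# Polynomial kernel regression: K_ij = (x_i * x_j + 1)^degree for scalar
# inputs, alpha = K^{-1} y, and a new point z is predicted as
# sum_i (x_i * z + 1)^degree * alpha_i.
def solve(A, b):
    # Gaussian elimination with partial pivoting on a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_kernel_regression(xs, ys, degree):
    K = [[(xi * xj + 1.0) ** degree for xj in xs] for xi in xs]
    return solve(K, ys)          # alpha = K^{-1} y

def predict_kernel(xs, alpha, degree, z):
    return sum(((xi * z + 1.0) ** degree) * ai for xi, ai in zip(xs, alpha))

xs = [0.0, 1.0, 2.0]
ys = [1.0, 2.0, 5.0]             # samples of y = x^2 + 1
alpha = fit_kernel_regression(xs, ys, degree=2)
print(round(predict_kernel(xs, alpha, 2, 3.0), 6))   # 10.0 (= 3^2 + 1)
```

Because a degree-2 kernel spans all quadratics, interpolating three samples of y = x² + 1 recovers it exactly.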

Ridge Regression
w_{ridge} = (XX^T + λI)^{-1} XY
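In the single-feature case the ridge solution reduces to one division, which makes the shrinkage effect of λ easy to see (a sketch; names are illustrative):

```python
# Ridge regression, single feature: (XX^T + lambda*I)^{-1} XY reduces to
# sum(x*y) / (sum(x*x) + lam).
def fit_ridge(xs, ys, lam):
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

w = fit_ridge([1.0, 2.0, 3.0], [2.0, 4.0, 6.0], lam=1.0)
print(w)                      # 28/15 ≈ 1.8667, shrunk below the OLS value 2.0
```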

Error
RMSE = √( (1/n) Σ_{i=1}^{n} (ŷ_i − y_i)^2 )
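The RMSE formula translates directly (a minimal sketch):

```python
import math

# Root mean squared error between predictions y_hat and targets y.
def rmse(y_hat, y):
    n = len(y)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(y_hat, y)) / n)

print(round(rmse([1.0, 2.0, 5.0], [1.0, 2.0, 3.0]), 4))   # sqrt(4/3) ≈ 1.1547
```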

SVM
Objective Function
α* = argmin_{α ≥ 0} f(α)

Where

f(α) = (1/2) α^T Q α − α^T 1

Q = Y^T X^T X Y
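One way to sketch this minimization is projected gradient descent on f(α), clipping α back to α ≥ 0 after each step (an illustrative toy solver, not necessarily the method used in the course; Q_ij = y_i y_j x_i·x_j):

```python
# Projected gradient descent on the SVM dual f(a) = 0.5 a^T Q a - a^T 1,
# with Q_ij = y_i * y_j * (x_i . x_j), projected onto a >= 0.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def svm_dual(points, labels, eta=0.25, steps=100):
    n = len(points)
    Q = [[labels[i] * labels[j] * dot(points[i], points[j])
          for j in range(n)] for i in range(n)]
    alpha = [0.0] * n
    for _ in range(steps):
        grad = [sum(Q[i][j] * alpha[j] for j in range(n)) - 1.0
                for i in range(n)]                    # Q a - 1
        alpha = [max(0.0, a - eta * g) for a, g in zip(alpha, grad)]
    return alpha

points = [(1.0, 0.0), (-1.0, 0.0)]
labels = [1.0, -1.0]
alpha = svm_dual(points, labels)
# Recover the primal weights: w = sum_i alpha_i * y_i * x_i
w = [sum(alpha[i] * labels[i] * points[i][k] for i in range(len(points)))
     for k in range(2)]
print([round(v, 3) for v in w])   # [1.0, 0.0]
```

For this toy pair the margin boundaries fall exactly on the two points, as expected for a hard-margin SVM.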

Equation of Decision Boundary


y_{db} = −(w_0 / w_1) x

Supporting Hyperplanes
y = (1 / w_1) − (w_0 / w_1) x

And

y = −(1 / w_1) − (w_0 / w_1) x
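These three lines are straightforward to evaluate for given weights (a sketch assuming 2-D inputs and w_1 ≠ 0; the example weights are made up):

```python
# Decision boundary and supporting hyperplanes in 2-D as functions of x,
# for weights (w0, w1) with w1 != 0.
def boundary(w0, w1, x):
    return -(w0 / w1) * x

def support_pos(w0, w1, x):          # w0*x + w1*y = +1
    return 1.0 / w1 - (w0 / w1) * x

def support_neg(w0, w1, x):          # w0*x + w1*y = -1
    return -1.0 / w1 - (w0 / w1) * x

w0, w1 = 1.0, 2.0
print(boundary(w0, w1, 2.0))      # -1.0
print(support_pos(w0, w1, 2.0))   # -0.5
print(support_neg(w0, w1, 2.0))   # -1.5
```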

For soft-margin SVM, change the bounds on each α_i in the optimization from α_i ≥ 0 to 0 ≤ α_i ≤ C.

Perceptron
If sign(w^T x_i) ≠ y_i for a data point:

w = w + x_i y_i
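The update rule above can be sketched as a full training loop that repeats until an epoch passes with no mistakes (function names and the toy data are illustrative):

```python
# Perceptron: on a misclassified point (sign(w . x_i) != y_i), add y_i * x_i
# to w. Repeat epochs until no point is misclassified.
def sign(v):
    return 1 if v > 0 else (-1 if v < 0 else 0)

def train_perceptron(points, labels, max_epochs=100):
    w = [0.0] * len(points[0])
    for _ in range(max_epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            if sign(sum(wi * xi for wi, xi in zip(w, x))) != y:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:
            break
    return w

points = [(2.0, 1.0), (-1.0, -2.0), (1.0, 2.0), (-2.0, -1.0)]
labels = [1, -1, 1, -1]
w = train_perceptron(points, labels)
print(w)                      # [2.0, 1.0] — separates this data
```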
