Support Vector Machines For Classification: A Seminar On Data Mining
Roshan P Koshy
210CS3058
National Institute of Technology,
Rourkela, Orissa
A separating hyperplane can be written as

W · X + b = 0 (1)

where W is a weight vector and b a scalar (the bias). For data with two input attributes, writing the bias b as an additional weight w0, this becomes

w0 + w1 x1 + w2 x2 = 0 (2)
Thus, any point that lies above the separating hyperplane satisfies
w0 + w1 x1 + w2 x2 > 0 (3)
Similarly, any point that lies below the separating hyperplane satisfies

w0 + w1 x1 + w2 x2 < 0 (4)
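As a toy sketch of the sign test in Equations (3) and (4), in Java (the weights and the test point below are invented for illustration, not values from this seminar):

// Toy sketch of the sign test in Equations (3) and (4).
// The weights w0, w1, w2 and the point are invented for illustration.
public class HyperplaneSign {
    public static void main(String[] args) {
        double w0 = -0.5, w1 = 1.0, w2 = 2.0;  // assumed example weights
        double[] x = {0.4, 0.3};               // a test point (x1, x2)

        double d = w0 + w1 * x[0] + w2 * x[1];
        // d > 0: the point lies above the hyperplane (class +1);
        // d < 0: the point lies below it (class -1).
        System.out.println(d > 0 ? "class +1" : "class -1");
    }
}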
The weights can be adjusted so that the hyperplanes defining the "sides" of the margin can be written as

H1: w0 + w1 x1 + w2 x2 ≥ 1 for yi = +1 (5)
H2: w0 + w1 x1 + w2 x2 ≤ −1 for yi = −1 (6)

That is, any tuple that falls on or above H1 belongs to class +1, and any tuple that falls on or below H2 belongs to class −1. Combining the two inequalities of Equations (5) and (6), we get

yi (w0 + w1 x1 + w2 x2) ≥ 1, ∀i (7)
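A step the slides pass over, sketched here in standard textbook notation (not taken from this seminar): the distance between H1 and H2 is 2/‖W‖, so finding the maximum marginal hyperplane (MMH) amounts to the constrained quadratic problem

\[
\min_{W,\,b}\ \tfrac{1}{2}\lVert W\rVert^{2}
\quad\text{subject to}\quad
y_i\,(W \cdot X_i + b) \ge 1 \ \text{ for every training tuple } (X_i, y_i).
\]

This is the quadratic optimization problem that the next step refers to.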
The next step is to rewrite Equation (7) as what is known as a constrained (quadratic) optimization problem, using a Lagrangian formulation and then solving for the solution with the Karush-Kuhn-Tucker (KKT) conditions. For linearly separable data, the support vectors are a subset of the actual training tuples.
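As a sketch of that formulation (the standard derivation, not reproduced from the slides): each constraint in Equation (7) gets a Lagrange multiplier αi ≥ 0, giving

\[
L_P = \tfrac{1}{2}\lVert W\rVert^{2}
    - \sum_i \alpha_i \bigl[\, y_i\,(W \cdot X_i + b) - 1 \,\bigr],
\]

and the KKT stationarity conditions, W = \sum_i \alpha_i y_i X_i and \sum_i \alpha_i y_i = 0, reduce it to the dual problem

\[
\max_{\alpha}\ \sum_i \alpha_i
  - \tfrac{1}{2}\sum_i \sum_j \alpha_i \alpha_j\, y_i y_j\, (X_i \cdot X_j),
\qquad \alpha_i \ge 0,\ \ \sum_i \alpha_i y_i = 0 .
\]

The training tuples with αi > 0 are exactly the support vectors mentioned above.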
Once we have found the support vectors and the MMH, we have a trained support vector machine.
The MMH can be rewritten as the decision boundary
d(X^T) = \sum_{i=1}^{l} y_i\,\alpha_i\, X_i \cdot X^T + b_0 (8)
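Here yi is the class label of support vector Xi, X^T is a test tuple, αi and b0 are numeric parameters determined by the optimization above, and l is the number of support vectors. A minimal sketch of Equation (8) in Java (all numeric values below are invented for illustration; a real SVM learns them):

// Minimal sketch of the decision boundary in Equation (8).
public class SvmDecision {
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // d(X^T) = sum_{i=1}^{l} y_i * alpha_i * (X_i . X^T) + b0
    static double decide(double[][] sv, double[] y, double[] alpha,
                         double b0, double[] xTest) {
        double d = b0;
        for (int i = 0; i < sv.length; i++) {
            d += y[i] * alpha[i] * dot(sv[i], xTest);
        }
        return d; // the sign of d gives the predicted class
    }

    public static void main(String[] args) {
        double[][] sv = {{1.0, 1.0}, {2.0, 3.0}}; // assumed support vectors
        double[] y = {+1, -1};                    // their class labels
        double[] alpha = {0.6, 0.6};              // assumed multipliers
        double b0 = -0.2;                         // assumed bias
        double d = decide(sv, y, alpha, b0, new double[]{1.5, 1.0});
        System.out.println(d >= 0 ? "class +1" : "class -1");
    }
}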
Weka provides two SVM algorithms, SMO and LibSVM. Here we test with SMO, which uses a linear kernel function by default.
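A minimal sketch of how such a test could be run against the Weka API (the dataset file name is a placeholder, not a file from this seminar; any ARFF file with a nominal class attribute works):

import java.util.Random;
import weka.classifiers.Evaluation;
import weka.classifiers.functions.SMO;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class SmoTest {
    public static void main(String[] args) throws Exception {
        // Load a dataset in ARFF format; "iris.arff" is a placeholder name.
        Instances data = new DataSource("iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1); // last attribute is the class

        SMO smo = new SMO(); // Weka's SMO defaults to a linear PolyKernel

        // Evaluate with 10-fold cross-validation, as the Weka Explorer does.
        Evaluation eval = new Evaluation(data);
        eval.crossValidateModel(smo, data, 10, new Random(1));
        System.out.println(eval.toSummaryString());
        System.out.println(eval.toMatrixString()); // confusion matrix
    }
}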