Support Vector Machine: Scenario 1

Support Vector Machine (SVM) is a supervised machine learning technique used for classification and regression. It works by finding the hyperplane that separates data points of different classes with the maximum margin. Support vectors are the data points closest to this hyperplane. SVM is effective for both linear and non-linear classification problems and can handle multiple continuous and categorical variables. It is robust to outliers and works well in situations with more dimensions than data samples.

Uploaded by Garvit Mehta

Support Vector Machine

Support Vector Machine, commonly called SVM, is a powerful supervised learning technique that finds its application in classification and regression, though it is most often used for classification problems. It can handle both categorical and multiple continuous variables. The main objective is to segregate the data points into different classes with the help of a Maximum Marginal Hyperplane (MMH), which may lie in a space of many dimensions. Support vectors are the coordinates of the observations, i.e., the data points that lie closest to the hyperplane; the margin is the distance between the hyperplane and these nearest points. Let us take a look at how the above works with the following scenarios.
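Before walking through the scenarios, a minimal sketch of the idea in code may help. The following uses scikit-learn's `svm.SVC` with a linear kernel on a made-up two-class dataset (the points and labels below are illustrative assumptions, not from this document); after fitting, the support vectors, i.e., the points closest to the separating hyperplane, can be inspected directly:

```python
# Minimal sketch: fit a linear SVM on a tiny 2-D toy dataset
# and inspect the support vectors (illustrative data, not from the text).
import numpy as np
from sklearn import svm

# Two linearly separable classes ("stars" vs "circles")
X = np.array([[1.0, 2.0], [2.0, 3.0], [2.0, 1.0],
              [6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = svm.SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print(clf.support_vectors_)            # the points closest to the hyperplane
print(clf.predict([[3.0, 2.0], [7.0, 6.0]]))  # classify two new points
```

Only the support vectors determine the decision boundary; the other training points could be removed without changing the fitted hyperplane.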

Scenario 1: We can see in the image below that we need to separate a group of stars and circles from each other. To do this, imagine three candidate hyperplanes, A, B and C. Many hyperplanes can separate the two classes, but here we consider only these three. One important thing to keep in mind is that the chosen hyperplane should have the maximum margin, i.e., the maximum distance from the nearest data points of the two classes. We can therefore say that hyperplane B fulfils our requirement and is the best hyperplane to classify the stars and circles.
Scenario 2: We can see in the image below that we again need to separate a group of stars and circles from each other. We can achieve this by considering three hyperplanes, A, B and C.

[Figure annotation: the nearest data points of each class lie at maximum distance from hyperplane C]

We can see from the image that hyperplane C is the best hyperplane in this situation, as it has the maximum distance from the nearest data points of both classes (stars and circles).
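The "maximum margin" picked out in these scenarios can also be checked numerically: for a linear SVM, the margin width equals 2/||w||, where w is the learned weight vector. A small sketch on an assumed four-point dataset (all values here are illustrative):

```python
# Sketch: compute the margin width 2/||w|| of a fitted linear SVM
# on an illustrative toy dataset (not from the original document).
import numpy as np
from sklearn import svm

X = np.array([[1.0, 1.0], [2.0, 2.0], [5.0, 5.0], [6.0, 6.0]])
y = np.array([0, 0, 1, 1])

clf = svm.SVC(kernel="linear", C=1e6)  # large C approximates a hard margin
clf.fit(X, y)

w = clf.coef_[0]
margin = 2.0 / np.linalg.norm(w)
print(f"margin width: {margin:.2f}")   # distance between the two margin lines
```

Here the nearest points of the two classes are (2, 2) and (5, 5), so the margin the optimizer finds is the full gap between them measured perpendicular to the hyperplane.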

Robust to outliers: One important feature of SVM is that it deals effectively with outliers. Let us try to understand this with the help of the following situation:

Fig 1 Fig 2
As we can see from Fig 1, one star is lying in the region of the circles. This star is clearly an outlier, so in this case SVM ignores the outlier and finds a hyperplane that has the maximum margin from the remaining data points, thus classifying the stars and circles into two correct groups, as seen in Fig 2.
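In the soft-margin formulation, this tolerance for outliers is controlled by the regularization parameter C: a small C lets the optimizer accept a few margin violations in exchange for a wider margin. A sketch on an assumed dataset with one star planted inside the circle cluster (all points are illustrative):

```python
# Sketch: a soft-margin SVM with a small C leaves an outlier star
# on the circles' side rather than shrinking the margin to fit it.
# (Illustrative data, not from the original document.)
import numpy as np
from sklearn import svm

X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.0],   # stars
              [6.0, 6.0], [7.0, 5.0], [6.5, 7.0],   # circles
              [6.5, 6.0]])                           # outlier star among circles
y = np.array([0, 0, 0, 1, 1, 1, 0])

clf = svm.SVC(kernel="linear", C=0.1)  # small C: tolerate the outlier
clf.fit(X, y)

# The outlier region is still assigned to the circles, keeping a wide margin
print(clf.predict([[6.5, 6.0]]))
```

With a very large C the optimizer would instead try hard to classify every training point correctly, which here is impossible for a linear boundary and in general makes the model sensitive to noise.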

Applications:
1. SVM is used in handwriting recognition.
2. It is used in facial recognition.
3. It is used in image classification.
Advantages:
1. It is very effective in high-dimensional spaces.
2. It is memory efficient, since only the support vectors are needed to define the decision function.
3. It remains effective even when the number of dimensions is greater than the number of samples.
Disadvantages:
1. It is not well suited to very large datasets, as training becomes slow.
2. It performs poorly when the dataset has a lot of noise, i.e., when the classes overlap.
3. If the number of features greatly exceeds the number of training samples, the model is prone to overfitting, so the kernel and regularization term must be chosen carefully.
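The interplay between advantage 3 and disadvantage 3 can be illustrated with a small sketch on synthetic data where features outnumber samples (the dataset, seed, and parameters below are assumptions chosen for illustration):

```python
# Sketch: a linear SVM still fits cleanly when features (50) outnumber
# samples (10), but with so few samples the risk of overfitting is high,
# so regularization matters. (Synthetic illustrative data.)
import numpy as np
from sklearn import svm

rng = np.random.default_rng(0)
n_samples, n_features = 10, 50          # more dimensions than samples
X = rng.normal(size=(n_samples, n_features))
y = np.array([0] * 5 + [1] * 5)
X[y == 1] += 1.0                        # shift class 1 so a real signal exists

clf = svm.SVC(kernel="linear", C=1.0).fit(X, y)
print(clf.score(X, y))                  # separates the training set easily
```

A perfect training score in this regime says little about generalization; held-out validation is what distinguishes real signal from overfitting.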
