
DATA MINING

LECTURE 10B
Classification
k-nearest neighbor classifier
Naïve Bayes
Logistic Regression
Support Vector Machines
NEAREST NEIGHBOR CLASSIFICATION
Illustrating Classification Task

A learning algorithm uses the training set to induce a model (induction); the model is then applied to the test set to deduce class labels for the unseen records (deduction).

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Training Set → Learning algorithm (Induction) → Model → Apply Model to Test Set (Deduction)
Instance-Based Classifiers
• Store the training records (a set of stored cases with attributes Atr1, …, AtrN and a class label)
• Use the stored training records directly to predict the class label of unseen cases
Instance Based Classifiers
• Examples:
  • Rote-learner
    • Memorizes the entire training data and performs classification only if the attributes of the record match one of the training examples exactly
  • Nearest neighbor classifier
    • Uses the k “closest” points (nearest neighbors) to perform classification
Nearest Neighbor Classifiers
• Basic idea:
  • “If it walks like a duck, quacks like a duck, then it’s probably a duck”
• Given a test record: compute its distance to the training records, then choose the k “nearest” records
Nearest-Neighbor Classifiers
• Requires three things:
  – The set of stored records
  – A distance metric to compute the distance between records
  – The value of k, the number of nearest neighbors to retrieve
• To classify an unknown record:
  1. Compute its distance to the other training records
  2. Identify the k nearest neighbors
  3. Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)
Definition of Nearest Neighbor

(a) 1-nearest neighbor   (b) 2-nearest neighbor   (c) 3-nearest neighbor

• The k-nearest neighbors of a record x are the data points that have the k smallest distances to x
1 nearest-neighbor
• The Voronoi diagram defines the classification boundary
• Each cell of the diagram takes the class of the training point it contains (e.g., the cell of the green point takes the green point’s class)
Nearest Neighbor Classification
• Compute the distance between two points:
  • Euclidean distance:

        d(p, q) = sqrt( Σ_i (p_i − q_i)² )

• Determine the class from the nearest neighbor list
  • Take the majority vote of class labels among the k nearest neighbors
  • Optionally, weigh each vote according to distance
    • e.g., weight factor w = 1/d² (a code sketch follows)
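
The two steps above fit in a few lines. The following is a minimal sketch of distance-weighted k-NN voting (Python with NumPy; the function and variable names are illustrative, not from the slides):

import numpy as np
from collections import defaultdict

def knn_predict(X_train, y_train, x, k=3, weighted=True):
    # Euclidean distance from x to every stored training record
    dists = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]        # indices of the k closest records
    votes = defaultdict(float)
    for i in nearest:
        # weight factor w = 1/d^2 (the epsilon guards against a zero distance)
        w = 1.0 / (dists[i] ** 2 + 1e-12) if weighted else 1.0
        votes[y_train[i]] += w
    return max(votes, key=votes.get)       # class with the largest total vote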
Nearest Neighbor Classification…
• Choosing the value of k:
  • If k is too small, the classifier is sensitive to noise points
  • If k is too large, the neighborhood may include points from other classes
Nearest Neighbor Classification…
• Scaling issues
  • Attributes may have to be scaled to prevent distance measures from being dominated by one of the attributes (see the scaling sketch after this list)
  • Example:
    • height of a person may vary from 1.5m to 1.8m
    • weight of a person may vary from 90lb to 300lb
    • income of a person may vary from $10K to $1M
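
A minimal sketch of one common fix, min-max scaling each attribute to [0, 1] before computing distances (illustrative code, assuming the records are rows of a NumPy array):

import numpy as np

def min_max_scale(X):
    lo, hi = X.min(axis=0), X.max(axis=0)
    # After scaling, income in [$10K, $1M] and height in [1.5m, 1.8m]
    # contribute to the distance on the same [0, 1] scale.
    return (X - lo) / (hi - lo + 1e-12)    # epsilon avoids division by zero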
Nearest Neighbor Classification…
• Problem with the Euclidean measure:
  • High dimensional data
    • curse of dimensionality
  • Can produce counter-intuitive results:

        111111111110  vs  011111111111    d = 1.4142
        100000000000  vs  000000001    d = 1.4142

    The first pair of vectors agrees in ten of twelve positions, while the second pair shares no common 1s, yet the Euclidean distances are equal.
• Solution: normalize the vectors to unit length (a sketch follows)
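
A minimal sketch of unit-length normalization (illustrative code; each row of X is one record):

import numpy as np

def unit_normalize(X):
    norms = np.linalg.norm(X, axis=1, keepdims=True)  # Euclidean length of each row
    return X / np.maximum(norms, 1e-12)               # avoid division by zero

After normalization, the first pair above ends up at distance ≈ 0.43 while the second stays at ≈ 1.41, matching the intuition that the first pair is similar and the second is not.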


Nearest neighbor Classification…
• k-NN classifiers are lazy learners
  • They do not build models explicitly
  • Unlike eager learners such as decision trees
• Classifying unknown records is relatively expensive
  • Naïve algorithm: O(n) per query
  • Need for structures that retrieve the nearest neighbors fast: the Nearest Neighbor Search problem
Nearest Neighbor Search
• Two-dimensional kd-trees
  • A data structure for answering nearest neighbor queries in R²
• kd-tree construction algorithm (a code sketch follows this list):
  • Select the x or y dimension (alternating between the two)
  • Partition the space into two with a line passing through the median point
  • Repeat recursively in the two partitions as long as there are enough points
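
A minimal Python sketch of this construction (illustrative code; points are (x, y) tuples, and the recursion here simply stops at empty partitions rather than at a size threshold):

def build_kdtree(points, depth=0):
    if not points:
        return None
    axis = depth % 2                              # alternate: x (0), then y (1)
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2                        # median point defines the split line
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

Note that sorting at every level gives O(n log² n) construction; the O(n log n) bound quoted below requires presorting or a linear-time median selection.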
Nearest Neighbor Search
2-dimensional kd-trees (the slides illustrate the construction step by step, adding one splitting line at a time)
• region(u) – all the black points in the subtree of u
Nearest Neighbor Search
2-dimensional kd-trees
• A binary tree:
  • Size O(n)
  • Depth O(log n)
  • Construction time O(n log n)
  • Query time: worst case O(n), but for many cases O(log n)
• Generalizes to d dimensions
• Example of Binary Space Partitioning


SUPPORT VECTOR MACHINES
Support Vector Machines
• Find a linear hyperplane (decision boundary) that will separate the data
• One possible solution: hyperplane B1
• Another possible solution: hyperplane B2
• Many other solutions are possible
• Which one is better, B1 or B2? How do you define “better”?
Support Vector Machines
• Find the hyperplane that maximizes the margin => B1 is better than B2
  (in the figure, B1's margin, bounded by b11 and b12, is wider than B2's margin, bounded by b21 and b22)

Support Vector Machines
• The decision boundary B1 is the hyperplane

        w · x + b = 0

  and the two margin boundaries (b11, b12) are

        w · x + b = +1      and      w · x + b = −1

• The classifier:

        f(x) = +1  if w · x + b ≥ 1
        f(x) = −1  if w · x + b ≤ −1

• Margin = 2 / ||w||
Support Vector Machines
• We want to maximize:  Margin = 2 / ||w||
• Which is equivalent to minimizing:  L(w) = ||w||² / 2
• But subject to the following constraints:

        w · x_i + b ≥ +1   if y_i = +1
        w · x_i + b ≤ −1   if y_i = −1

• This is a constrained optimization problem
• Numerical approaches exist to solve it (e.g., quadratic programming); a library-based sketch follows
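
In practice the quadratic program is solved by a library. A minimal sketch using scikit-learn (an assumption of this example, not named in the slides; a very large C approximates the hard-margin problem):

import numpy as np
from sklearn.svm import SVC

# Six linearly separable points with labels +1 / -1
X = np.array([[1, 1], [2, 2], [2, 0], [0, 0], [-1, 0], [0, -1]], dtype=float)
y = np.array([1, 1, 1, -1, -1, -1])

clf = SVC(kernel="linear", C=1e6)     # huge C ~= hard margin
clf.fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]
print("w =", w, "b =", b, "margin =", 2 / np.linalg.norm(w))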
Support Vector Machines
• What if the problem is not linearly separable?
  • Points can end up on the wrong side of the margin, at a distance ξ_i/||w|| from their margin boundary
• Introduce slack variables ξ_i
• Need to minimize:

        L(w) = ||w||²/2 + C (Σ_{i=1}^{N} ξ_i)^k

• Subject to:

        w · x_i + b ≥ +1 − ξ_i   if y_i = +1
        w · x_i + b ≤ −1 + ξ_i   if y_i = −1
Nonlinear Support Vector Machines
• What if the decision boundary is not linear?
• Transform the data into a higher dimensional space and separate it there
• Use the Kernel Trick: compute inner products in the transformed space directly from the original attributes (a sketch follows)
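
A minimal sketch of a non-linear boundary via the kernel trick, again using scikit-learn (an assumption of this example) with an RBF kernel on XOR-like data that no single hyperplane can separate:

import numpy as np
from sklearn.svm import SVC

X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]], dtype=float)
y = np.array([-1, -1, 1, 1])                 # XOR labels: not linearly separable

clf = SVC(kernel="rbf", gamma=2.0, C=10.0)   # K(x, z) = exp(-gamma * ||x - z||^2)
clf.fit(X, y)
print(clf.predict(X))                        # should recover all four labels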


LOGISTIC REGRESSION
Classification via regression
• Instead of predicting the class of a record, we want to predict the probability of the class given the record
• The problem of predicting continuous values is called the regression problem
• General approach: find a continuous function that models the continuous points
Example: Linear regression
• Given a dataset of the form (x_1, y_1), …, (x_n, y_n), find a linear function that, given the vector x_i, predicts the value y_i as

        y_i′ = w · x_i

• Find a vector of weights w that minimizes the sum of squared errors

        Σ_i (y_i − y_i′)²

• Several techniques exist for solving the problem (one sketch follows)
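
One such technique is ordinary least squares, which NumPy solves in closed form. A minimal sketch (the data values are made up for illustration):

import numpy as np

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # one-dimensional feature vectors
y = np.array([2.1, 3.9, 6.2, 8.1])           # observed values

Xb = np.hstack([X, np.ones((len(X), 1))])    # append a constant 1 for an intercept
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)   # minimizes the sum of squared errors
print("slope, intercept:", w)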
Classification via regression
• Assume a linear classification boundary w · x = 0, with w · x > 0 on the positive side and w · x < 0 on the negative side
• For the positive class: the bigger the value of w · x, the further the point is from the classification boundary, and the higher our certainty of membership in the positive class
  • Define P(C+|x) as an increasing function of w · x
• For the negative class: the smaller the value of w · x, the further the point is from the classification boundary, and the higher our certainty of membership in the negative class
  • Define P(C−|x) as a decreasing function of w · x
Logistic Regression
• The logistic function:

        f(t) = 1 / (1 + e^(−t))

• Applied to w · x, it gives the class probabilities:

        P(C+ | x) = 1 / (1 + e^(−w·x))
        P(C− | x) = e^(−w·x) / (1 + e^(−w·x))

        log [ P(C+ | x) / P(C− | x) ] = w · x

• Linear regression on the log-odds ratio
• Logistic Regression: find the vector w that maximizes the probability (likelihood) of the observed data (a sketch follows)
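
A minimal sketch of that maximization by gradient ascent on the log-likelihood (illustrative code; labels are assumed to be 0/1, and the learning rate and iteration count are arbitrary choices):

import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))          # the logistic function f(t)

def fit_logistic(X, y, lr=0.1, iters=1000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)                    # current estimate of P(C+ | x)
        w += lr * X.T @ (y - p)               # gradient of the log-likelihood
    return w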
Logistic Regression
• Produces a probability estimate for the class membership, which is often very useful
• The weights can be useful for understanding feature importance
• Works for relatively large datasets
• Fast to apply
NAÏVE BAYES CLASSIFIER
Bayes Classifier
• A probabilistic framework for solving classification problems
• A, C random variables
• Joint probability: Pr(A=a, C=c)
• Conditional probability: Pr(C=c | A=a)
• Relationship between joint and conditional probability distributions:

        Pr(C, A) = Pr(C | A) × Pr(A) = Pr(A | C) × Pr(C)

• Bayes Theorem:

        P(C | A) = P(A | C) P(C) / P(A)
Bayesian Classifiers
• Consider each attribute and the class label as random variables

Training data:

Tid  Refund  Marital Status  Taxable Income  Evade
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

• Evade C:           event space {Yes, No},                    P(C) = (0.3, 0.7)
• Refund A1:         event space {Yes, No},                    P(A1) = (0.3, 0.7)
• Marital Status A2: event space {Single, Married, Divorced},  P(A2) = (0.4, 0.4, 0.2)
• Taxable Income A3: event space R,                            P(A3) ~ Normal(μ, σ)
Bayesian Classifiers
• Given a record X over attributes (A1, A2, …, An)
  • E.g., X = (‘Yes’, ‘Single’, 125K)
• The goal is to predict class C
  • Specifically, we want to find the value c of C that maximizes P(C=c | X)
  • The Maximum A Posteriori probability estimate
• Can we estimate P(C | X) directly from data?
  • This means that we estimate the probability for all possible values of the class variable.
Bayesian Classifiers
• Approach:
  • Compute the posterior probability P(C | A1, A2, …, An) for all values of C using the Bayes theorem:

        P(C | A1 A2 … An) = P(A1 A2 … An | C) P(C) / P(A1 A2 … An)

  • Choose the value of C that maximizes P(C | A1, A2, …, An)
  • Equivalent to choosing the value of C that maximizes P(A1, A2, …, An | C) P(C)
• How to estimate P(A1, A2, …, An | C)?
Naïve Bayes Classifier
• Assume independence among the attributes Ai when the class is given:

        P(A1, A2, …, An | C) = P(A1 | C) P(A2 | C) ⋯ P(An | C)

• We can estimate P(Ai | C) for all values of Ai and C
• A new point X = (a1, …, an) is classified to class c if

        P(C = c) ∏_i P(Ai = ai | C = c)

  is maximum over all possible values of C (a code sketch follows)
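
A minimal sketch of this rule for discrete attributes, with probabilities estimated by the counts discussed on the next slides (illustrative code; smoothing is omitted here and shown later):

from collections import Counter, defaultdict

def train_nb(records, labels):
    class_counts = Counter(labels)                 # N_c for every class c
    cond = defaultdict(Counter)                    # N_ac per (attribute, class)
    for x, c in zip(records, labels):
        for i, a in enumerate(x):
            cond[(i, c)][a] += 1
    return class_counts, cond, len(labels)

def predict_nb(x, class_counts, cond, n):
    best, best_score = None, -1.0
    for c, nc in class_counts.items():
        score = nc / n                             # prior P(C = c)
        for i, a in enumerate(x):
            score *= cond[(i, c)][a] / nc          # P(A_i = a | C = c)
        if score > best_score:
            best, best_score = c, score
    return best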


How to Estimate Probabilities from Data?
• Class prior probability: P(C = c) = Nc / N
  • e.g., P(C = No) = 7/10, P(C = Yes) = 3/10
• For discrete attributes:

        P(Ai = a | C = c) = N_ac / N_c

  where N_ac is the number of instances having attribute value Ai = a and belonging to class c
• Examples (counts from the training table above):
        P(Status=Married | No) = 4/7
        P(Refund=Yes | Yes) = 0
How to Estimate Probabilities from Data?
• For continuous attributes:
  • Discretize the range into bins
    • one ordinal attribute per bin
    • violates the independence assumption
  • Two-way split: (A < v) or (A > v)
    • choose only one of the two splits as the new attribute
  • Probability density estimation:
    • Assume the attribute follows a normal distribution
    • Use the data to estimate the parameters of the distribution (i.e., mean μ and standard deviation σ)
    • Once the probability distribution is known, we can use it to estimate the conditional probability P(Ai|c)
How to Estimate Probabilities from Data?
• Normal distribution:

        P(Ai = a | cj) = 1 / sqrt(2π σ_ij²) · exp( −(a − μ_ij)² / (2 σ_ij²) )

  • One distribution for each (Ai, cj) pair
• For (Income, Class=No):
  • If Class=No:
    • sample mean μ = 110
    • sample variance σ² = 2975

        P(Income = 120 | No) = 1 / (sqrt(2π) · 54.54) · e^(−(120−110)² / (2·2975)) ≈ 0.0072
Example of Naïve Bayes Classifier
• Creating a Naïve Bayes Classifier essentially means computing counts:

Total number of records: N = 10

Class No:                         Class Yes:
  Number of records: 7              Number of records: 3
  Attribute Refund:                 Attribute Refund:
    Yes: 3                            Yes: 0
    No: 4                             No: 3
  Attribute Marital Status:         Attribute Marital Status:
    Single: 2                         Single: 2
    Divorced: 1                       Divorced: 1
    Married: 4                        Married: 0
  Attribute Income:                 Attribute Income:
    mean: 110                         mean: 90
    variance: 2975                    variance: 25
Example of Naïve Bayes Classifier

Given a Test Record:  X = (Refund = No, Married, Income = 120K)

Naïve Bayes Classifier:
  P(Refund=Yes|No) = 3/7            P(Refund=No|No) = 4/7
  P(Refund=Yes|Yes) = 0             P(Refund=No|Yes) = 1
  P(Marital Status=Single|No) = 2/7
  P(Marital Status=Divorced|No) = 1/7
  P(Marital Status=Married|No) = 4/7
  P(Marital Status=Single|Yes) = 2/3
  P(Marital Status=Divorced|Yes) = 1/3
  P(Marital Status=Married|Yes) = 0
  For taxable income:
    If class=No:  sample mean = 110, sample variance = 2975
    If class=Yes: sample mean = 90,  sample variance = 25

  P(X|Class=No) = P(Refund=No|Class=No) × P(Married|Class=No) × P(Income=120K|Class=No)
                = 4/7 × 4/7 × 0.0072 = 0.0024

  P(X|Class=Yes) = P(Refund=No|Class=Yes) × P(Married|Class=Yes) × P(Income=120K|Class=Yes)
                 = 1 × 0 × 1.2×10^-9 = 0

  Priors: P(No) = 0.7, P(Yes) = 0.3

  Since P(X|No)P(No) > P(X|Yes)P(Yes), we have P(No|X) > P(Yes|X)
  => Class = No
Naïve Bayes Classifier
• If one of the conditional probabilities is zero, then the entire expression becomes zero
• Probability estimation:

        Original:    P(Ai = a | C = c) = N_ac / N_c
        Laplace:     P(Ai = a | C = c) = (N_ac + 1) / (N_c + N_i)
        m-estimate:  P(Ai = a | C = c) = (N_ac + m·p) / (N_c + m)

  where N_i is the number of possible values of attribute Ai, p is a prior probability, and m is a parameter
Example of Naïve Bayes Classifier (with Laplace Smoothing)

Given a Test Record:  X = (Refund = No, Married, Income = 120K)

Naïve Bayes Classifier:
  P(Refund=Yes|No) = 4/9            P(Refund=No|No) = 5/9
  P(Refund=Yes|Yes) = 1/5           P(Refund=No|Yes) = 4/5
  P(Marital Status=Single|No) = 3/10
  P(Marital Status=Divorced|No) = 2/10
  P(Marital Status=Married|No) = 5/10
  P(Marital Status=Single|Yes) = 3/6
  P(Marital Status=Divorced|Yes) = 2/6
  P(Marital Status=Married|Yes) = 1/6
  For taxable income:
    If class=No:  sample mean = 110, sample variance = 2975
    If class=Yes: sample mean = 90,  sample variance = 25

  P(X|Class=No) = P(Refund=No|Class=No) × P(Married|Class=No) × P(Income=120K|Class=No)
                = 5/9 × 5/10 × 0.0072

  P(X|Class=Yes) = P(Refund=No|Class=Yes) × P(Married|Class=Yes) × P(Income=120K|Class=Yes)
                 = 4/5 × 1/6 × 1.2×10^-9

  Priors: P(No) = 0.7, P(Yes) = 0.3

  Since P(X|No)P(No) > P(X|Yes)P(Yes), we have P(No|X) > P(Yes|X)
  => Class = No
Implementation details
• Computing the conditional probabilities involves multiplying many very small numbers
  • The products get very close to zero, and there is a danger of numeric instability (underflow)
• We can deal with this by computing the logarithm of the conditional probability:

        log [ P(C) ∏_i P(Ai|C) ] = log P(C) + Σ_i log P(Ai|C)
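
A two-line sketch of the log-space comparison (illustrative; it assumes all probabilities are strictly positive, e.g., after smoothing):

import math

def log_score(prior, cond_probs):
    # log[P(C) * prod_i P(A_i|C)] = log P(C) + sum_i log P(A_i|C)
    return math.log(prior) + sum(math.log(p) for p in cond_probs)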
Naïve Bayes for Text Classification
• Naïve Bayes is commonly used for text classification
• For a document d with terms t1, …, tk:

        P(c | d) ∝ P(c) ∏_i P(ti | c)

  • P(ti | c) = fraction of terms from all documents in class c that are ti
• Easy to implement and works relatively well
• Limitation: hard to incorporate additional features (beyond words)
Naïve Bayes (Summary)
• Robust to isolated noise points
• Handles missing values by ignoring the instance during probability estimate calculations
• Robust to irrelevant attributes
• The independence assumption may not hold for some attributes
  • Use other techniques such as Bayesian Belief Networks (BBN)
• Naïve Bayes can produce a probability estimate, but it is usually a very biased one
  • Logistic Regression is better for obtaining probabilities
Generative vs Discriminative models
• Naïve Bayes is a type of generative model
• Generative process:
  • First pick the category of the record
  • Then, given the category, generate the attribute values from the distribution of the category

        C → A1, A2, …, An   (the attributes are conditionally independent given C)

• We use the training data to learn the distribution of the values in each class
Generative vs Discriminative models
• Logistic Regression and SVM are discriminative models
  • The goal is to find the boundary that discriminates between the two classes from the training data
• To classify the language of a document, you can:
  • Either learn the two languages and find which is more likely to have generated the words you see (generative)
  • Or learn what differentiates the two languages (discriminative)
