Unit - IV
• Basics
– Problem, goal, evaluation
• Methods
– Nearest Neighbor
– Decision Tree
– Naïve Bayes
– Rule-based Classification
– Logistic Regression
– Support Vector Machines
– Ensemble methods
– ………
• Advanced topics
– Semi-supervised Learning
– Multi-view Learning
– Transfer Learning
– ……
Readings
Classification: Definition
• Given a collection of records (training set)
– Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
– A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
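A minimal sketch of this train/validate workflow, assuming scikit-learn is available; the toy attribute values and labels below are made up for illustration:

    # Split a labeled data set, fit a model on the training part,
    # and measure accuracy on the held-out test part.
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X = [[0, 60], [0, 70], [1, 95], [1, 120], [0, 85], [1, 220]]  # attribute values
    y = ["No", "No", "Yes", "No", "Yes", "No"]                    # class attribute

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
    model = DecisionTreeClassifier().fit(X_tr, y_tr)
    print(accuracy_score(y_te, model.predict(X_te)))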
Illustrating Classification Task

Training Set (excerpt):
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
6    No       Medium   60K      No

A learning algorithm induces a model from the training set.

Test Set (excerpt):
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
15   No       Large    67K      ?

The learned model is then applied to assign class labels to the test records.
Examples of Classification Task

Metrics for Performance Evaluation

Confusion matrix for a two-class problem:

                      PREDICTED CLASS
                      Class=Yes   Class=No
ACTUAL    Class=Yes   a (TP)      b (FN)
CLASS     Class=No    c (FP)      d (TN)

a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)
Cost-Sensitive Measures

  \text{Precision}\ (p) = \frac{a}{a+c}

  \text{Recall}\ (r) = \frac{a}{a+b}

  \text{F-measure}\ (F) = \frac{2rp}{r+p} = \frac{2a}{2a+b+c}
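A minimal sketch computing these measures directly from the confusion-matrix counts defined above (a = TP, b = FN, c = FP):

    def prf(a, b, c):
        p = a / (a + c)            # precision
        r = a / (a + b)            # recall
        f = 2 * r * p / (r + p)    # F-measure; equals 2a / (2a + b + c)
        return p, r, f

    print(prf(8, 2, 4))  # ≈ (0.667, 0.8, 0.727)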
Methods of Estimation
• Holdout
– Reserve 2/3 for training and 1/3 for testing
• Random subsampling
– Repeated holdout
• Cross validation
– Partition data into k disjoint subsets
– k-fold: train on k-1 partitions, test on the remaining one
– Leave-one-out: k=n
• Stratified sampling
– oversampling vs undersampling
• Bootstrap
– Sampling with replacement
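A sketch of k-fold partitioning using only the standard library; each record appears in exactly one test fold, and leave-one-out is the special case k = n:

    def k_folds(n, k):
        """Partition indices 0..n-1 into k disjoint test folds."""
        folds = [list(range(i, n, k)) for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

    for train, test in k_folds(6, 3):   # 3-fold on 6 records
        print(train, test)              # first iteration: [1, 4, 2, 5] [0, 3]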
Classification Techniques
• Nearest Neighbor
• Decision Tree
• Naïve Bayes
• Rule-based Classification
• Logistic Regression
• Support Vector Machines
• Ensemble methods
• ……
Nearest Neighbor Classifiers

• Store the training records
• Use training records to predict the class label of unseen cases

[Figure: a set of stored cases with attributes Atr1 ... AtrN and class labels A, B, C, alongside an unseen case with the same attributes to be classified.]
Nearest-Neighbor Classifiers

Classifying an unknown record requires three things:
– The set of stored records
– A distance metric to compute the distance between records
– The value of k, the number of nearest neighbors to retrieve
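A hedged sketch of the resulting classifier, using Euclidean distance and a majority vote over the k nearest stored records (standard library only):

    import math
    from collections import Counter

    def knn_predict(train, query, k=3):
        """train: list of (feature_vector, label) pairs; query: feature vector."""
        dist = lambda x, y: math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))
        neighbors = sorted(train, key=lambda rec: dist(rec[0], query))[:k]
        return Counter(label for _, label in neighbors).most_common(1)[0][0]

    train = [((1.0, 1.0), "A"), ((1.2, 0.9), "A"), ((5.0, 5.0), "B"), ((5.1, 4.8), "B")]
    print(knn_predict(train, (1.1, 1.0)))  # "A"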
Definition of Nearest Neighbor

The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.
[Figure: the 1-, 2-, and 3-nearest neighbors of a record X.]
1 Nearest-Neighbor: Voronoi Diagram

The decision regions of a 1-nearest-neighbor classifier form a Voronoi diagram: each training record owns the region of points that are closer to it than to any other record.
Nearest Neighbor Classification
• Scaling issues
– Attributes may have to be scaled to prevent
distance measures from being dominated by one
of the attributes
– Example:
• height of a person may vary from 1.5m to 1.8m
• weight of a person may vary from 90lb to 300lb
• income of a person may vary from $10K to $1M
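A sketch of min-max scaling, which maps every attribute to [0, 1] so that no single attribute dominates the distance computation (assumes no attribute is constant):

    def min_max_scale(records):
        cols = list(zip(*records))                      # one tuple per attribute
        lo, hi = [min(c) for c in cols], [max(c) for c in cols]
        return [tuple((v - l) / (h - l) for v, l, h in zip(rec, lo, hi))
                for rec in records]

    data = [(1.5, 90, 10_000), (1.65, 180, 50_000), (1.8, 300, 1_000_000)]
    print(min_max_scale(data))  # every attribute now lies in [0, 1]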
Example of a Decision Tree

[Figure: a decision tree induced from the Tid / Refund / Marital Status / Taxable Income / Cheat training data; internal nodes hold the splitting attributes (Refund, then Marital Status, then Taxable Income), and leaves hold the class labels.]
Another Example of Decision Tree

Training data:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Tree: the root splits on MarSt (Married → NO; Single, Divorced → test Refund); Refund = Yes → NO; Refund = No → test TaxInc (< 80K → NO; > 80K → YES).

There could be more than one tree that fits the same data!
Decision Tree Classification Task

Training Set (excerpt):
Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
6    No       Medium   60K      No

A tree induction algorithm learns a decision tree (the model) from the training set.

Test Set (excerpt):
Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
15   No       Large    67K      ?

The decision tree is then applied to the test set.
Apply Model to Test Data

Test record:
Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree and follow the branch that matches the test record at each node:
• Refund? The record has Refund = No, so take the No branch to the MarSt node.
• MarSt? The record has Marital Status = Married, so take the Married branch, which is the leaf NO.

(For reference, the full tree: Refund = Yes → NO; Refund = No → MarSt; MarSt = Single, Divorced → TaxInc; MarSt = Married → NO; TaxInc < 80K → NO; TaxInc > 80K → YES.)

Assign Cheat = “No” to the test record.
Decision Tree Induction
• Many Algorithms:
– Hunt’s Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
– ……
General Structure of Hunt’s Algorithm

• Let D_t be the set of training records that reach a node t
• General Procedure:
– If D_t contains records that all belong to the same class y_t, then t is a leaf node labeled y_t
– If D_t contains records that belong to more than one class, use an attribute to split the data into smaller subsets. Recursively apply the procedure to each subset
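A hedged sketch of this recursive procedure for nominal attributes; unlike a real implementation, it splits on attributes in a fixed order rather than choosing the best split:

    from collections import Counter

    def hunt(records, attributes):
        """records: list of (attribute_dict, class_label) pairs."""
        labels = [y for _, y in records]
        if len(set(labels)) == 1:               # all records in one class: leaf
            return labels[0]
        if not attributes:                      # no tests left: majority-class leaf
            return Counter(labels).most_common(1)[0][0]
        attr = attributes[0]                    # a real learner would pick the best attribute
        branches = {}
        for value in {x[attr] for x, _ in records}:
            subset = [(x, y) for x, y in records if x[attr] == value]
            branches[value] = hunt(subset, attributes[1:])
        return (attr, branches)                 # internal node: (attribute, branches)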
Hunt’s Algorithm

[Figure: Hunt’s algorithm grows the tree step by step on the Refund / Marital Status / Taxable Income data. Final tree: Refund = Yes → Don’t Cheat; Refund = No → split on Marital Status (Married → Don’t Cheat; Single, Divorced → split on Taxable Income: < 80K → Don’t Cheat, >= 80K → Cheat).]
Tree Induction
• Greedy strategy
– Split the records based on an attribute test that optimizes a chosen criterion
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
How to Specify Test Condition?

• Depends on the attribute type: nominal, ordinal, or continuous (see the next slides)
• Depends on the number of ways to split: binary (2-way) or multi-way
Splitting Based on Nominal Attributes

• Multi-way split: use as many partitions as distinct values
• Binary split: divide the values into two subsets
Splitting Based on Ordinal Attributes

• Multi-way split: use as many partitions as distinct values
  Size → Small | Medium | Large
• Binary split: divide the values into two subsets that preserve the order, e.g. Size → {Small, Medium} | {Large}
• What about this split? Size → {Small, Large} | {Medium}
  (it does not preserve the order among the values)
Splitting Based on Continuous Attributes

• Binary decision: e.g. Taxable Income > 80K? (Yes / No)
• Multi-way split: discretize into disjoint ranges, e.g. Taxable Income? → < 10K | … | > 80K
Tree Induction
• Greedy strategy
– Split the records based on an attribute test that optimizes a chosen criterion.
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
How to determine the Best Split

Before splitting: 10 records of class 0 and 10 records of class 1.
[Figure: several candidate attribute tests split the 20 records in different ways; which test condition is the best?]
How to determine the Best Split

• Greedy approach: nodes with homogeneous class distribution are preferred
• Need a measure of node impurity:
  C0: 5, C1: 5 (non-homogeneous, high degree of impurity)
  C0: 9, C1: 1 (homogeneous, low degree of impurity)
How to Find the Best Split

Before splitting, the node has counts C0 = N00 and C1 = N01, with impurity M0. Splitting on attribute A (Yes/No) produces two children with impurities M1 and M2, whose weighted average is M12; splitting on B gives children with impurities M3 and M4 and weighted average M34.

Gain = M0 - M12 versus M0 - M34: choose the split with the larger gain, i.e. the lower weighted child impurity.
Measures of Node Impurity
• Gini Index
• Entropy
• Misclassification error
Measure of Impurity: GINI

• Gini index for a given node t:

  GINI(t) = 1 - \sum_j [p(j \mid t)]^2

  where p(j | t) is the relative frequency of class j at node t.

Examples for computing GINI

  C1: 0, C2: 6 → P(C1) = 0/6, P(C2) = 6/6, Gini = 1 - 0^2 - 1^2 = 0.000
  C1: 1, C2: 5 → P(C1) = 1/6, P(C2) = 5/6, Gini = 1 - (1/6)^2 - (5/6)^2 = 0.278
  C1: 2, C2: 4 → P(C1) = 2/6, P(C2) = 4/6, Gini = 1 - (2/6)^2 - (4/6)^2 = 0.444
  C1: 3, C2: 3 → Gini = 0.500
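A one-function sketch that reproduces the Gini values above from a node’s class counts:

    def gini(counts):
        """Gini index of a node given its per-class record counts."""
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    print(gini([0, 6]))  # 0.0
    print(gini([1, 5]))  # 0.2777...
    print(gini([3, 3]))  # 0.5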
Splitting Based on GINI

• When a node p is split into k partitions (children), the quality of the split is computed as

  GINI_{split} = \sum_{i=1}^{k} \frac{n_i}{n}\, GINI(i)

  where n_i is the number of records at child i and n is the number of records at node p.
Binary Attributes: Computing GINI Index

Measure of Impurity: Entropy

• Entropy for a given node t:

  Entropy(t) = -\sum_j p(j \mid t)\, \log_2 p(j \mid t)

  where p(j | t) is the relative frequency of class j at node t.
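The same idea for entropy, with the usual convention that a class with zero records contributes nothing (0 log 0 = 0):

    import math

    def entropy(counts):
        """Entropy (log base 2) of a node given its per-class record counts."""
        n = sum(counts)
        return -sum((c / n) * math.log2(c / n) for c in counts if c)

    print(entropy([3, 3]))  # 1.0
    print(entropy([0, 6]))  # 0.0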
Splitting Based on Information Gain

• Information gain:

  GAIN_{split} = Entropy(p) - \sum_{i=1}^{k} \frac{n_i}{n}\, Entropy(i)

  where parent node p is split into k partitions and n_i is the number of records in partition i. Choose the split that maximizes GAIN, i.e. the one with the smallest weighted average entropy of the children.
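A sketch of the gain of a candidate split, building on the entropy() sketch above; the parent’s class counts and each child’s class counts are the only inputs:

    def gain(parent_counts, children_counts):
        n = sum(parent_counts)
        weighted = sum(sum(c) / n * entropy(c) for c in children_counts)
        return entropy(parent_counts) - weighted

    # Split 10 records (3 Yes, 7 No) into a pure child and a mixed child:
    print(gain([3, 7], [[0, 3], [3, 4]]))  # ≈ 0.19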
Splitting Criteria based on Classification Error

• Classification error at a node t:

  Error(t) = 1 - \max_j p(j \mid t)

Examples for Computing Error

  C1: 1, C2: 5 → Error = 1 - max(1/6, 5/6) = 1/6 = 0.167
  C1: 3, C2: 3 → Error = 1 - max(3/6, 3/6) = 0.500
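And the analogous sketch for misclassification error:

    def classification_error(counts):
        """Misclassification error of a node given its per-class record counts."""
        n = sum(counts)
        return 1.0 - max(counts) / n

    print(classification_error([1, 5]))  # 0.1666...
    print(classification_error([3, 3]))  # 0.5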
Comparison among Splitting Criteria

[Figure: for a 2-class problem, Gini, entropy, and misclassification error plotted against the fraction p of records in one class; all three vanish at p = 0 and p = 1 and peak at p = 0.5.]
Tree Induction
• Greedy strategy
– Split the records based on an attribute test that optimizes a chosen criterion.
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
Stopping Criteria for Tree Induction

• Stop expanding a node when all its records belong to the same class
• Stop expanding a node when all its records have the same (or very similar) attribute values
• Early termination (pre-pruning)
Decision Tree Based Classification
• Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other classification
techniques for many simple data sets
Underfitting and Overfitting (Example)

Circular points: 0.5 \le \sqrt{x_1^2 + x_2^2} \le 1
Triangular points: \sqrt{x_1^2 + x_2^2} < 0.5 or \sqrt{x_1^2 + x_2^2} > 1
(the triangular points lie outside the annulus containing the circular points)
Underfitting and Overfitting

[Figure: training and test error versus the number of tree nodes. Underfitting: the model is too simple and both errors are high. Overfitting: as the tree grows, training error keeps decreasing while test error starts to increase.]
Occam’s Razor

• Given two models with similar generalization errors, prefer the simpler model
• A more complex model has a greater chance of having been fitted accidentally by errors in the data
How to Address Overfitting
• Post-pruning
– Grow decision tree to its entirety
– Trim the nodes of the decision tree in a bottom-up
fashion
– If generalization error improves after trimming,
replace sub-tree by a leaf node.
– Class label of leaf node is determined from
majority class of instances in the sub-tree
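A hedged sketch of this bottom-up procedure for the (attribute, branches) tree representation used in the Hunt sketch earlier; generalization error is estimated on a held-out validation set:

    from collections import Counter

    def predict(tree, x):
        while isinstance(tree, tuple):          # internal node: (attr, branches)
            attr, branches = tree
            tree = branches[x[attr]]            # assumes the value was seen in training
        return tree                             # leaf: a class label

    def prune(tree, val_records):
        """Replace a subtree by its majority-class leaf if that does not
        increase the error on the validation records reaching it."""
        if not isinstance(tree, tuple) or not val_records:
            return tree
        attr, branches = tree
        branches = {v: prune(c, [(x, y) for x, y in val_records if x[attr] == v])
                    for v, c in branches.items()}
        tree = (attr, branches)
        leaf = Counter(y for _, y in val_records).most_common(1)[0][0]
        err_tree = sum(predict(tree, x) != y for x, y in val_records)
        err_leaf = sum(leaf != y for _, y in val_records)
        return leaf if err_leaf <= err_tree else tree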
Handling Missing Attribute Values

Missing values affect decision tree construction in three ways (illustrated on the next three slides):
• How impurity measures are computed
• How instances with missing values are distributed to child nodes
• How a test instance with a missing value is classified
Computing Impurity Measure

Training data (record 10 has a missing Refund value):
Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   ?       Single          90K             Yes

Before splitting:
Entropy(Parent) = -0.3 log2(0.3) - 0.7 log2(0.7) = 0.8813

Class counts:
             Class=Yes  Class=No
Refund=Yes   0          3
Refund=No    2          4
Refund=?     1          0

Split on Refund (using only the 9 records with known Refund):
Entropy(Refund=Yes) = 0
Entropy(Refund=No) = -(2/6) log2(2/6) - (4/6) log2(4/6) = 0.9183
Entropy(Children) = 0.3 (0) + 0.6 (0.9183) = 0.551
Gain = 0.9 × (0.8813 - 0.551) = 0.2973
(the gain is scaled by 0.9, the fraction of records with a known Refund value)
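The same numbers, reproduced with the entropy() sketch from earlier; the weights 0.3 and 0.6 are the fractions of all 10 records reaching each child, and 0.9 is the fraction with a known Refund value:

    parent = entropy([3, 7])                                   # 0.8813
    children = 0.3 * entropy([0, 3]) + 0.6 * entropy([2, 4])   # 0.551
    print(0.9 * (parent - children))                           # ≈ 0.297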
Distribute Instances

Records 1-9 have known Refund values and go to the matching child: 3 records to Refund=Yes and 6 to Refund=No. Record 10 (Refund = ?) is sent to both children with fractional weights: weight 3/9 to the Refund=Yes child and weight 6/9 to the Refund=No child.
Classify Instances

New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

A test record with a missing attribute value is sent down each branch of the corresponding test with a probability estimated from the weighted training counts at that node:

            Married  Single  Divorced  Total
Class=No    3        1       0         4
Other Issues
• Data Fragmentation
• Search Strategy
• Expressiveness
• Tree Replication
Data Fragmentation

• The number of records gets smaller as you go down the tree
• The number of records at a leaf may be too small to make a statistically significant decision
Search Strategy

• The induction algorithm presented so far grows the tree greedily, top-down
• Other strategies?
– Bottom-up
– Bi-directional
Expressiveness

• Decision trees can represent any discrete-valued function of the attributes, but some functions (e.g. parity) require exponentially large trees
• Because each test involves a single attribute, the resulting decision boundaries are axis-parallel (next slide)
Decision Boundary

[Figure: a 2-D data set partitioned by a decision tree; the root test x < 0.43? and subsequent tests on y carve the unit square into class-pure rectangular regions.]

• The border line between two neighboring regions of different classes is known as the decision boundary
• The decision boundary is parallel to the axes because each test condition involves a single attribute at a time
Oblique Decision Trees

[Figure: a single oblique split x + y < 1 separating the two classes.]
• The test condition may involve multiple attributes (e.g. x + y < 1)
• More expressive representation, but finding the best test condition is computationally expensive
Review Questions
• What is classification?
• How to evaluate a classification model?
• How to use a decision tree to make predictions?
• How to construct a decision tree from training data?
• How to compute the Gini index, entropy, and misclassification error?
• How to avoid overfitting by pre-pruning or post-pruning a decision tree?