Unit - IV

The document outlines topics related to classification algorithms including basics, methods, and advanced topics. It discusses classification tasks such as predicting tumor cells and classifying transactions. Metrics for evaluating classification performance are described, such as accuracy, precision, recall and F-measure. Popular classification techniques like k-nearest neighbors, decision trees, naive Bayes, and support vector machines are mentioned. Methods for estimating classifier performance like holdout, cross-validation and bootstrap are also covered.


Outline

• Basics
– Problem, goal, evaluation
• Methods
– Nearest Neighbor
– Decision Tree
– Naïve Bayes
– Rule-based Classification
– Logistic Regression
– Support Vector Machines
– Ensemble methods
– ………
• Advanced topics
– Semi-supervised Learning
– Multi-view Learning
– Transfer Learning
– ……
2
Readings

• Tan, Steinbach, Kumar. Introduction to Data Mining. Chapters 4 and 5.
• Han, Kamber, Pei. Data Mining: Concepts and Techniques. Chapters 8 and 9.
• Additional readings posted on website

3
Classification: Definition
• Given a collection of records (training set)
  – Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
  – A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
Illustrating Classification Task
Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Workflow: the Training Set is fed to a Learning algorithm (Induction) to learn a Model; the Model is then applied to the Test Set (Deduction).

5
Examples of Classification Task

• Predicting tumor cells as benign or malignant
• Classifying credit card transactions as legitimate or fraudulent
• Classifying emails as spam or normal email
• Categorizing news stories as finance, weather, entertainment, sports, etc.
6
Metrics for Performance Evaluation
• Focus on the predictive capability of a model
  – Rather than on how fast it can classify or build models, scalability, etc.
• Confusion matrix:

                        PREDICTED CLASS
                        Class=Yes   Class=No
  ACTUAL    Class=Yes       a           b          a: TP (true positive)
  CLASS     Class=No        c           d          b: FN (false negative)
                                                   c: FP (false positive)
                                                   d: TN (true negative)
7
Metrics for Performance Evaluation

                        PREDICTED CLASS
                        Class=Yes   Class=No
  ACTUAL    Class=Yes     a (TP)      b (FN)
  CLASS     Class=No      c (FP)      d (TN)

• Most widely-used metric:

  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
8
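
As a rough illustration (plain Python; the confusion-matrix counts are hypothetical, chosen only for the arithmetic), accuracy can be computed directly from the four cells:

# a = TP, b = FN, c = FP, d = TN (hypothetical counts for illustration)
a, b, c, d = 50, 10, 5, 35

accuracy = (a + d) / (a + b + c + d)
print(f"Accuracy = {accuracy:.3f}")   # (50 + 35) / 100 = 0.850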
Limitation of Accuracy

• Consider a 2-class problem
  – Number of Class 0 examples = 9990
  – Number of Class 1 examples = 10
• If the model predicts everything to be class 0, accuracy is 9990/10000 = 99.9%
  – Accuracy is misleading because the model does not detect any class 1 example

9
Cost-Sensitive Measures

  Precision (p) = a / (a + c)

  Recall (r) = a / (a + b)

  F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c)

10
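
Continuing the same hypothetical counts, a minimal sketch of the three measures:

a, b, c = 50, 10, 5                   # a = TP, b = FN, c = FP (hypothetical)

precision = a / (a + c)               # 50 / 55 = 0.909...
recall    = a / (a + b)               # 50 / 60 = 0.833...
f_measure = 2 * recall * precision / (recall + precision)
# Equivalent closed form: 2a / (2a + b + c) = 100 / 115 = 0.869...
print(precision, recall, f_measure)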
Methods of Estimation
• Holdout
– Reserve 2/3 for training and 1/3 for testing
• Random subsampling
– Repeated holdout
• Cross validation
– Partition data into k disjoint subsets
– k-fold: train on k-1 partitions, test on the remaining one
– Leave-one-out: k=n
• Stratified sampling
– oversampling vs undersampling
• Bootstrap
– Sampling with replacement
11
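
A minimal sketch of holdout and k-fold splitting over record indices (plain Python; the number of records and folds are hypothetical):

import random

n, k = 30, 3                            # hypothetical: 30 records, 3 folds
indices = list(range(n))
random.shuffle(indices)

# Holdout: reserve 2/3 for training and 1/3 for testing
split = (2 * n) // 3
train_idx, test_idx = indices[:split], indices[split:]

# k-fold cross validation: partition indices into k disjoint subsets,
# train on k-1 of them, test on the remaining one
folds = [indices[i::k] for i in range(k)]
for i in range(k):
    test_fold = folds[i]
    train_fold = [j for f in folds if f is not test_fold for j in f]
    print(f"fold {i}: train={len(train_fold)}, test={len(test_fold)}")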
Classification Techniques

• Nearest Neighbor
• Decision Tree
• Naïve Bayes
• Rule-based Classification
• Logistic Regression
• Support Vector Machines
• Ensemble methods
• ……

12
Nearest Neighbor Classifiers
• Store the training records
• Use the stored training records to predict the class label of unseen cases

[Figure: set of stored cases (Atr1, ..., AtrN, Class) with class labels A/B/C, and an unseen case (Atr1, ..., AtrN) to be labeled]
13
Nearest-Neighbor Classifiers
• Requires three things
  – The set of stored records
  – A distance metric to compute the distance between records
  – The value of k, the number of nearest neighbors to retrieve

• To classify an unknown record:
  – Compute its distance to the training records
  – Identify the k nearest neighbors
  – Use the class labels of the nearest neighbors to determine the class label of the unknown record (e.g., by taking a majority vote)

14
Definition of Nearest Neighbor

[Figure: (a) 1-nearest neighbor, (b) 2-nearest neighbor, (c) 3-nearest neighbor of a record x]

The k-nearest neighbors of a record x are the data points that have the k smallest distances to x.

15
1-Nearest Neighbor

[Figure: Voronoi diagram induced by the 1-nearest-neighbor classifier]

16
Nearest Neighbor Classification

• Compute the distance between two points:
  – Euclidean distance:

    d(p, q) = sqrt( Σi (pi − qi)² )

• Determine the class from the nearest neighbor list
  – Take the majority vote of class labels among the k nearest neighbors
  – Weigh the vote according to distance
    • weight factor w = 1/d²

17
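
A minimal k-NN sketch along these lines (plain Python; the toy 2-D data and the knn_predict helper are hypothetical, not from any library):

import math
from collections import Counter

def knn_predict(train, query, k=3, weighted=False):
    """train: list of (feature_vector, label); query: feature_vector."""
    # Euclidean distance from the query to every stored record
    dists = [(math.dist(p, query), label) for p, label in train]
    dists.sort(key=lambda t: t[0])
    votes = Counter()
    for d, label in dists[:k]:
        # plain majority vote, or weight each vote by 1/d^2
        votes[label] += 1.0 / (d * d + 1e-9) if weighted else 1.0
    return votes.most_common(1)[0][0]

# hypothetical 2-D training data
train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]
print(knn_predict(train, (1.1, 0.9), k=3))   # -> "A"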
Nearest Neighbor Classification

• Choosing the value of k:


– If k is too small, sensitive to noise points
– If k is too large, neighborhood may include points from
other classes

18
Nearest Neighbor Classification

• Scaling issues
– Attributes may have to be scaled to prevent
distance measures from being dominated by one
of the attributes
– Example:
• height of a person may vary from 1.5m to 1.8m
• weight of a person may vary from 90lb to 300lb
• income of a person may vary from $10K to $1M

19
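
A small min-max scaling sketch for the example above (plain Python; the three records are hypothetical):

# Hypothetical records: (height in m, weight in lb, income in $)
records = [(1.5, 90, 10_000), (1.8, 300, 1_000_000), (1.7, 150, 50_000)]

cols = list(zip(*records))
lo = [min(c) for c in cols]
hi = [max(c) for c in cols]

# Rescale every attribute to [0, 1] so no single attribute dominates the distance
scaled = [tuple((v - l) / (h - l) for v, l, h in zip(r, lo, hi)) for r in records]
print(scaled)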
Nearest neighbor Classification

• k-NN classifiers are lazy learners
  – They do not build models explicitly
  – Different from eager learners such as decision tree induction
  – Classifying unknown records is relatively expensive

20
Example of a Decision Tree

Training Data:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)

Refund?
  Yes: NO
  No: MarSt?
    Single, Divorced: TaxInc?
      < 80K: NO
      > 80K: YES
    Married: NO

21
Another Example of Decision Tree

(Same training data as the previous slide.)

MarSt?
  Married: NO
  Single, Divorced: Refund?
    Yes: NO
    No: TaxInc?
      < 80K: NO
      > 80K: YES

There could be more than one tree that fits the same data!

22
Decision Tree Classification Task
(Same figure as the "Illustrating Classification Task" slide: the Training Set is fed to a Tree Induction algorithm to learn a Model, here a Decision Tree, which is then applied to the Test Set by deduction.)

23
Apply Model to Test Data
Test Data:

Refund  Marital Status  Taxable Income  Cheat
No      Married         80K             ?

Start from the root of the tree:

Refund?
  Yes: NO
  No: MarSt?
    Single, Divorced: TaxInc?
      < 80K: NO
      > 80K: YES
    Married: NO

24
(Slides 25-28 repeat the same tree, following the test record one edge at a time: Refund = No, then MarSt = Married.)
Apply Model to Test Data

Test record: Refund = No, Marital Status = Married, Taxable Income = 80K.
Following Refund = No and then MarSt = Married leads to a leaf: assign Cheat to "No".
29
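
A minimal sketch of this walk, assuming a hard-coded Python encoding of the tree above (the dictionary keys are a hypothetical encoding of the attribute names):

def predict(record):
    # Root test: Refund?
    if record["Refund"] == "Yes":
        return "No"                                   # leaf: NO
    # Refund = No -> MarSt?
    if record["MaritalStatus"] == "Married":
        return "No"                                   # leaf: NO
    # Single or Divorced -> TaxInc?
    return "Yes" if record["TaxableIncome"] > 80_000 else "No"

test = {"Refund": "No", "MaritalStatus": "Married", "TaxableIncome": 80_000}
print(predict(test))                                  # -> "No" (assign Cheat = "No")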
Decision Tree Classification Task

(Repeat of the previous Decision Tree Classification Task figure: induce the decision tree from the Training Set, then apply it to the Test Set.)

30
Decision Tree Induction

• Many Algorithms:
– Hunt’s Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
– ……

31
General Structure of Hunt’s Algorithm
• Let Dt be the set of training records that reach a node t
• General procedure:
  – If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  – If Dt contains records that belong to more than one class, use an attribute to split the data into smaller subsets, and recursively apply the procedure to each subset

(Running example: the 10-record Refund / Marital Status / Taxable Income / Cheat training set shown earlier.)
32
Hunt’s Algorithm Tid Refund Marital Taxable
Status Income Cheat

Refund 1 Yes Single 125K No

Yes No 2 No Married 100K No


3 No Single 70K No
Don’t
Cheat 4 Yes Married 120K No
5 No Divorced 95K Yes
6 No Married 60K No
7 Yes Divorced 220K No
Refund Refund
8 No Single 85K Yes
Yes No Yes No
9 No Married 75K No
Don’t Don’t Marital
Marital 10 No Single 90K Yes
Cheat Status
Cheat Status 10

Single, Single,
Married Married
Divorced Divorced
Don’t Taxable Don’t
Cheat Income Cheat
< 80K >= 80K
Don’t Cheat
Cheat
33
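
A condensed Python sketch of this recursive procedure (the gini and best_split helpers are hypothetical illustrations; the Gini index itself is introduced on a later slide):

from collections import Counter

def gini(labels):
    # impurity of a set of class labels
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(records, attributes):
    # pick the attribute whose split gives the lowest weighted child impurity
    def weighted_gini(attr):
        groups = {}
        for r, label in records:
            groups.setdefault(r[attr], []).append(label)
        n = len(records)
        return sum(len(g) / n * gini(g) for g in groups.values())
    return min(attributes, key=weighted_gini)

def hunt(records, attributes):
    """records: list of (attribute_dict, label); returns a nested-dict tree."""
    labels = [label for _, label in records]
    if len(set(labels)) == 1:             # Dt is pure -> leaf labeled with that class
        return labels[0]
    if not attributes:                    # nothing left to split on -> majority class
        return Counter(labels).most_common(1)[0][0]
    attr = best_split(records, attributes)
    tree = {attr: {}}
    for value in sorted({r[attr] for r, _ in records}):
        subset = [(r, l) for r, l in records if r[attr] == value]
        tree[attr][value] = hunt(subset, [a for a in attributes if a != attr])
    return tree

# Tiny hypothetical example on two of the slide's attributes:
data = [({"Refund": "Yes", "MarSt": "Single"}, "No"),
        ({"Refund": "No",  "MarSt": "Married"}, "No"),
        ({"Refund": "No",  "MarSt": "Single"}, "Yes")]
print(hunt(data, ["Refund", "MarSt"]))
# -> {'Refund': {'No': {'MarSt': {'Married': 'No', 'Single': 'Yes'}}, 'Yes': 'No'}}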
Tree Induction

• Greedy strategy
– Split the records based on an attribute test that optimizes a chosen criterion

• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting

34
How to Specify Test Condition?

• Depends on attribute types


– Nominal
– Ordinal
– Continuous

• Depends on number of ways to split


– 2-way split
– Multi-way split

35
Splitting Based on Nominal Attributes

• Multi-way split: use as many partitions as there are distinct values
    CarType → {Family}, {Sports}, {Luxury}

• Binary split: divides values into two subsets; need to find the optimal partitioning
    CarType → {Sports, Luxury} vs {Family}   OR   CarType → {Family, Luxury} vs {Sports}
36
Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as there are distinct values
    Size → {Small}, {Medium}, {Large}

• Binary split: divides values into two subsets; need to find the optimal partitioning
    Size → {Small, Medium} vs {Large}   OR   Size → {Medium, Large} vs {Small}

• What about this split?
    Size → {Small, Large} vs {Medium}
  (It groups non-adjacent values and so ignores the attribute's ordering.)
37
Splitting Based on Continuous Attributes

• Different ways of handling


– Discretization to form an ordinal categorical
attribute

– Binary decision: (A < v) or (A ≥ v)
  • consider all possible splits and find the best cut
  • can be more computation-intensive

38
Splitting Based on Continuous Attributes

(i) Binary split:     Taxable Income > 80K?  → Yes / No
(ii) Multi-way split: Taxable Income?  → < 10K, [10K, 25K), [25K, 50K), [50K, 80K), > 80K

39
Tree Induction

• Greedy strategy
– Split the records based on an attribute test that optimizes a chosen criterion

• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting

40
How to determine the Best Split
Before Splitting: 10 records of class C0, 10 records of class C1

Candidate splits:
  On-Campus?   Yes: C0=6, C1=4        No: C0=4, C1=6
  Car Type?    Family: C0=1, C1=3     Sports: C0=8, C1=0     Luxury: C0=1, C1=7
  Student ID?  c1 through c10: each C0=1, C1=0     c11 through c20: each C0=0, C1=1

Which test condition is the best?

41
How to determine the Best Split

• Greedy approach:
  – Nodes with homogeneous class distribution are preferred
• Need a measure of node impurity:

  C0: 5, C1: 5  →  non-homogeneous, high degree of impurity
  C0: 9, C1: 1  →  homogeneous, low degree of impurity

42
How to Find the Best Split
Before splitting: a node with counts C0 = N00, C1 = N01 and impurity M0.

Candidate split on A (Yes/No): children N1 (C0 = N10, C1 = N11) and N2 (C0 = N20, C1 = N21) with impurities M1 and M2; their weighted combination is M12.
Candidate split on B (Yes/No): children N3 and N4 with impurities M3 and M4; weighted combination M34.

Compare Gain = M0 − M12 vs M0 − M34.
43
Measures of Node Impurity

• Gini Index

• Entropy

• Misclassification error

44
Measure of Impurity: GINI
• Gini index for a given node t:

  GINI(t) = 1 − Σj [ p(j | t) ]²

  (NOTE: p(j | t) is the relative frequency of class j at node t.)

  – Maximum (1 − 1/nc) when records are equally distributed among all classes, implying least interesting information
  – Minimum (0) when all records belong to one class, implying most useful information

  C1: 0, C2: 6 → Gini = 0.000
  C1: 1, C2: 5 → Gini = 0.278
  C1: 2, C2: 4 → Gini = 0.444
  C1: 3, C2: 3 → Gini = 0.500
45
Examples for computing GINI
  GINI(t) = 1 − Σj [ p(j | t) ]²

  C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                  Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0

  C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                  Gini = 1 − (1/6)² − (5/6)² = 0.278

  C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                  Gini = 1 − (2/6)² − (4/6)² = 0.444
46
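
A quick check of these values in plain Python (the gini helper is a hypothetical utility, not library code):

def gini(counts):
    """counts: list of class counts at a node, e.g. [C1, C2]."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]))   # 0.0
print(gini([1, 5]))   # 0.277... (= 10/36)
print(gini([2, 4]))   # 0.444... (= 16/36)
print(gini([3, 3]))   # 0.5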
Splitting Based on GINI

• Used in CART, SLIQ, SPRINT.


• When a node p is split into k partitions (children), the quality of
split is computed as,
  GINI_split = Σi=1..k (ni / n) · GINI(i)

  where ni = number of records at child i, and n = number of records at node p.

47
Binary Attributes: Computing GINI Index

• Splits into two partitions
• Effect of weighing partitions:
  – Larger and purer partitions are sought

  Parent: C1 = 6, C2 = 6, Gini = 0.500

  Split B? → Node N1: C1 = 5, C2 = 2;  Node N2: C1 = 1, C2 = 4

  Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
  Gini(N2) = 1 − (1/5)² − (4/5)² = 0.32
  Gini(Children) = 7/12 × 0.408 + 5/12 × 0.32 = 0.371
48
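
A quick check of the weighted split in plain Python (hypothetical helpers, not library code):

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children):
    """children: list of per-child class-count lists."""
    n = sum(sum(c) for c in children)
    return sum(sum(c) / n * gini(c) for c in children)

n1, n2 = [5, 2], [1, 4]              # (C1, C2) counts at nodes N1 and N2
print(gini(n1), gini(n2))            # 0.408..., 0.32
print(gini_split([n1, n2]))          # 7/12 * 0.408 + 5/12 * 0.32 = 0.371...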
Entropy
• Entropy at a given node t:

  Entropy(t) = − Σj p(j | t) log2 p(j | t)

  (NOTE: p(j | t) is the relative frequency of class j at node t.)

  – Measures the purity of a node
    • Maximum (log nc) when records are equally distributed among all classes, implying least information
    • Minimum (0.0) when all records belong to one class, implying most information
49
Examples for computing Entropy

  Entropy(t) = − Σj p(j | t) log2 p(j | t)

  C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                  Entropy = − 0 log2 0 − 1 log2 1 = − 0 − 0 = 0

  C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                  Entropy = − (1/6) log2 (1/6) − (5/6) log2 (5/6) = 0.65

  C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                  Entropy = − (2/6) log2 (2/6) − (4/6) log2 (4/6) = 0.92
50
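
A quick check of these entropy values in plain Python (the entropy helper is a hypothetical utility):

from math import log2

def entropy(counts):
    """counts: list of class counts at a node; zero counts contribute nothing."""
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

print(entropy([1, 5]))   # 0.650...
print(entropy([2, 4]))   # 0.918...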
Splitting Based on Information Gain

• Information gain:

  GAIN_split = Entropy(p) − Σi=1..k (ni / n) · Entropy(i)

  Parent node p is split into k partitions; ni is the number of records in partition i.

  – Measures the reduction in entropy achieved by the split. Choose the split that achieves the most reduction (maximizes GAIN).
  – Used in ID3 and C4.5
51
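
A minimal information-gain sketch in plain Python (the parent and child counts are hypothetical):

from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

def info_gain(parent, children):
    """parent: class counts before the split; children: counts per partition."""
    n = sum(parent)
    return entropy(parent) - sum(sum(c) / n * entropy(c) for c in children)

# Hypothetical split of a (10, 10) parent into two partitions:
print(info_gain([10, 10], [[8, 2], [2, 8]]))   # about 0.278 bits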
Splitting Criteria based on Classification Error

• Classification error at a node t:

  Error(t) = 1 − maxi P(i | t)

• Measures the misclassification error made by a node.
  – Maximum (1 − 1/nc) when records are equally distributed among all classes, implying least interesting information
  – Minimum (0.0) when all records belong to one class, implying most interesting information

52
Examples for Computing Error

  Error(t) = 1 − maxi P(i | t)

  C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                  Error = 1 − max(0, 1) = 1 − 1 = 0

  C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                  Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6

  C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                  Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3

53
Comparison among Splitting Criteria
For a 2-class problem:

[Figure: Gini index, entropy and misclassification error as a function of p, the fraction of records in one class]

54
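
A small sketch comparing the three measures for a 2-class node as the class fraction p varies (plain Python; the sampled p values are arbitrary):

from math import log2

def gini(p):     return 1 - p**2 - (1 - p)**2
def entropy(p):  return 0.0 if p in (0, 1) else -p * log2(p) - (1 - p) * log2(1 - p)
def error(p):    return 1 - max(p, 1 - p)

# All three measures peak at p = 0.5 and vanish at p = 0 or p = 1:
for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p={p:.2f}  gini={gini(p):.3f}  entropy={entropy(p):.3f}  error={error(p):.3f}")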
Tree Induction

• Greedy strategy
– Split the records based on an attribute test that optimizes a chosen criterion

• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting

55
Stopping Criteria for Tree Induction

• Stop expanding a node when all the records


belong to the same class

• Stop expanding a node when all the records


have similar attribute values

• Early termination (to be discussed later)

56
Decision Tree Based Classification

• Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other classification
techniques for many simple data sets

57
Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points:
  0.5 ≤ sqrt(x1² + x2²) ≤ 1

Triangular points:
  sqrt(x1² + x2²) > 1 or sqrt(x1² + x2²) < 0.5
58
Underfitting and Overfitting

[Figure: training and test error versus tree size, illustrating overfitting]

59
Occam’s Razor

• Given two models with similar errors, one should prefer the simpler model over the more complex model

• For a complex model, there is a greater chance that it was fitted accidentally to errors in the data

• Therefore, one should include model complexity when evaluating a model
60
How to Address Overfitting
• Pre-Pruning (Early Stopping Rule)
– Stop the algorithm before it becomes a fully-grown tree
– Typical stopping conditions for a node:
• Stop if all instances belong to the same class
• Stop if all the attribute values are the same
– More restrictive conditions:
• Stop if the number of instances is less than some user-specified threshold
• Stop if the class distribution of instances is independent of the available features
• Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain)

61
How to Address Overfitting

• Post-pruning
– Grow the decision tree in its entirety
– Trim the nodes of the decision tree in a bottom-up
fashion
– If generalization error improves after trimming,
replace sub-tree by a leaf node.
– Class label of leaf node is determined from
majority class of instances in the sub-tree

62
Handling Missing Attribute Values

• Missing values affect decision tree


construction in three different ways:
– Affects how impurity measures are computed
– Affects how to distribute instance with missing
value to child nodes
– Affects how a test instance with missing value is
classified

63
Computing Impurity Measure
Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   ?       Single          90K             Yes    (missing value)

Before splitting:
  Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

Class counts by Refund value:
               Class=Yes  Class=No
  Refund=Yes       0          3
  Refund=No        2          4
  Refund=?         1          0

Split on Refund:
  Entropy(Refund=Yes) = 0
  Entropy(Refund=No)  = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
  Entropy(Children)   = 0.3 (0) + 0.6 (0.9183) = 0.551

  Gain = 0.9 × (0.8813 − 0.551) = 0.3303
64
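
A quick check of the entropy values above in plain Python (the entropy helper is a hypothetical utility):

from math import log2

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * log2(c / n) for c in counts if c > 0)

print(entropy([3, 7]))                  # parent: 3 Yes / 7 No      -> 0.8813...
print(entropy([2, 4]))                  # Refund = No               -> 0.9183...
print(0.3 * 0 + 0.6 * entropy([2, 4]))  # children, weighted 3/10 and 6/10 -> 0.551...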
Distribute Instances
The record with Tid = 10 (Refund = ?, Single, 90K, Class = Yes) has a missing Refund value.

Class counts from the records with a known Refund value:
  Refund = Yes:  Class=Yes 0, Class=No 3
  Refund = No:   Class=Yes 2, Class=No 4

Probability that Refund = Yes is 3/9; probability that Refund = No is 6/9.
Assign the record to the left (Refund = Yes) child with weight 3/9 and to the right (Refund = No) child with weight 6/9:
  Refund = Yes:  Class=Yes 0 + 3/9, Class=No 3
  Refund = No:   Class=Yes 2 + 6/9, Class=No 4
65
Classify Instances
New record: Tid = 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Weighted class counts at the MarSt node:
               Married  Single  Divorced  Total
  Class=No       3        1        0        4
  Class=Yes     6/9       1        1       2.67
  Total         3.67      2        1       6.67

Using the tree (Refund → MarSt → TaxInc):
  Probability that Marital Status = Married is 3.67/6.67
  Probability that Marital Status ∈ {Single, Divorced} is 3/6.67
66
Other Issues

• Data Fragmentation
• Search Strategy
• Expressiveness
• Tree Replication

67
Data Fragmentation

• Number of instances gets smaller as you


traverse down the tree

• Number of instances at the leaf nodes could


be too small to make any statistically
significant decision

68
Search Strategy

• Finding an optimal decision tree is NP-hard

• The algorithm presented so far uses a greedy,


top-down, recursive partitioning strategy to
induce a reasonable solution

• Other strategies?
– Bottom-up
– Bi-directional
69
Expressiveness

• Decision trees provide an expressive representation for learning discrete-valued functions
  – But they do not generalize well to certain types of Boolean functions
  • Example: parity function:
    – Class = 1 if there is an even number of Boolean attributes with truth value = True
    – Class = 0 if there is an odd number of Boolean attributes with truth value = True
  • For accurate modeling, must have a complete tree

• Not expressive enough for modeling continuous variables
  – Particularly when the test condition involves only a single attribute at a time

70
Decision Boundary
[Figure: points in the unit square classified by the tree
  x < 0.43?
    Yes: y < 0.47?  (Yes: 4 of one class, 0 of the other;  No: 0 and 4)
    No:  y < 0.33?  (Yes: 0 and 3;  No: 4 and 0)
]
• The border between two neighboring regions of different classes is known as the decision boundary
• The decision boundary is parallel to the axes because each test condition involves a single attribute at a time
71
Oblique Decision Trees

[Figure: an oblique split x + y < 1 separating the two classes]

• Test condition may involve multiple attributes


• More expressive representation
• Finding optimal test condition is computationally expensive
72
Take-away Message

• What’s classification?
• How to evaluate classification model?
• How to use decision tree to make predictions?
• How to construct a decision tree from training data?
• How to compute gini index, entropy, misclassification
error?
• How to avoid overfitting by pre-pruning or post-
pruning decision tree?

73
