Data Mining: Practical Machine Learning Tools and Techniques
2
Evaluation: the key to success
3
Issues in evaluation
4
Training and testing I
5
Training and testing II
6
Note on parameter tuning
7
Making the most of the data
8
Predicting performance
9
Confidence intervals
10
Mean and variance
• Mean and variance for a Bernoulli trial:
p, p (1–p)
• Expected success rate f=S/N
• Mean and variance for f : p, p (1–p)/N
• For large enough N, f follows a Normal distribution
• A c% confidence interval [–z ≤ X ≤ z] for a random variable X is
determined using: Pr[–z ≤ X ≤ z] = c
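• A minimal Python sketch of the relationship between c and z (it assumes Python 3.8+ for statistics.NormalDist; the 0.90 value is just an example):

from statistics import NormalDist

def z_for_confidence(c):
    # Pr[-z <= X <= z] = c for a standard normal X, so z is the (1 + c) / 2 quantile
    return NormalDist().inv_cdf((1 + c) / 2)

print(z_for_confidence(0.90))   # about 1.65, matching the 5% entry in the table on the next slide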
11
Confidence limits
• Confidence limits for the normal distribution with 0
mean and a variance of 1:

Pr[X ≥ z]    z
0.1%         3.09
0.5%         2.58
1%           2.33
5%           1.65
10%          1.28
20%          0.84
40%          0.25
• Thus: Pr[–1.65 ≤ X ≤ 1.65] = 90%
• Resulting equation, after standardizing f to 0 mean and unit variance:
Pr[–z ≤ (f–p)/√(p(1–p)/N) ≤ z] = c
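• A sketch that solves the resulting equation for p, giving the confidence interval for the true success rate (this is the Wilson score interval; the values of f, N and z below are just an example):

import math

def confidence_interval(f, N, z):
    # Solve (f - p)^2 = z^2 * p * (1 - p) / N for p; the two roots bound the interval
    centre = f + z * z / (2 * N)
    spread = z * math.sqrt(f / N - f * f / N + z * z / (4 * N * N))
    denom = 1 + z * z / N
    return (centre - spread) / denom, (centre + spread) / denom

print(confidence_interval(f=0.75, N=1000, z=1.65))   # 90% interval for p given an observed f of 75%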
13
Examples
15
Repeated holdout method
16
Cross-validation
17
More on cross-validation
18
Leave-one-out cross-validation
• Leave-one-out is a particular form of k-fold cross-validation:
• Set number of folds to number of training instances
• I.e., for n training instances, build classifier n times
• Makes best use of the data
• Involves no random subsampling
• Very computationally expensive (exception: using lazy
classifiers such as the nearest-neighbor classifier)
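• A minimal sketch of leave-one-out cross-validation in Python (it assumes scikit-learn and uses a nearest-neighbor classifier on the iris data purely as an example):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
n = len(y)
correct = 0
for i in range(n):                              # build the classifier n times
    train = np.arange(n) != i                   # every instance except the i-th
    clf = KNeighborsClassifier(n_neighbors=1).fit(X[train], y[train])
    correct += int(clf.predict(X[i:i + 1])[0] == y[i])
print("Leave-one-out accuracy:", correct / n)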
19
Leave-one-out CV and stratification
20
The bootstrap
21
The 0.632 bootstrap
22
Estimating error with the 0.632 bootstrap
23
More on the bootstrap
24
Hyperparameter selection
• Hyperparameter: parameter that can be tuned to optimize the
performance of a learning algorithm
• Different from basic parameter that is part of a model, such as a coefficient
in a linear regression model
• Example hyperparameter: k in the k-nearest neighbour classifier
• We are not allowed to peek at the final test data to choose the
value of this parameter
• Adjusting the hyperparameter to the test data will lead to optimistic
performance estimates on this test data!
• Parameter tuning needs to be viewed as part of the learning algorithm and
must be done using the training data only
• But how to get a useful estimate of performance for different
parameter values so that we can choose a value?
• Answer: split the data into a smaller “training” set and a “validation” set
(normally, the data is shuffled first)
• Build models using different values of k on the new, smaller training set and
evaluate them on the validation set
• Pick the best value of k and rebuild the model on the full original training set
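• A sketch of this procedure for k in the k-nearest neighbour classifier (it assumes scikit-learn; the dataset, split sizes and range of k values are arbitrary choices for illustration):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# Final test set: never used for choosing k
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Smaller "training" set and validation set carved out of the training data
X_sub, X_val, y_sub, y_val = train_test_split(X_train, y_train, random_state=1)

best_k = max(range(1, 16),
             key=lambda k: KNeighborsClassifier(n_neighbors=k)
                           .fit(X_sub, y_sub).score(X_val, y_val))
# Rebuild the model with the chosen k on the full original training set
final_model = KNeighborsClassifier(n_neighbors=best_k).fit(X_train, y_train)
print(best_k, final_model.score(X_test, y_test))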
25
Hyperparameters and cross-validation
• Note that k-fold cross-validation runs k different train-test
evaluations
• The above parameter tuning process using validation sets must be
applied separately to each of the k training sets!
• This means that, when hyperparameter tuning is applied, k
different hyperparameter values may be selected
• This is OK: hyperparameter tuning is part of the learning process
• Cross-validation evaluates the quality of the learning process, not the
quality of a particular model
• What to do when the training sets are very small, so that
performance estimates on a validation set are unreliable?
• We can use nested cross-validation (expensive!)
• For each training set of the “outer” k-fold cross-validation, run “inner”
p-fold cross-validations to choose the best hyperparameter value
• Outer cross-validation is used to estimate quality of learning process
• Inner cross-validations are used to choose hyperparameter values
• Inner cross-validations are part of the learning process!
26
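• A nested cross-validation sketch (it assumes scikit-learn; GridSearchCV plays the role of the inner p-fold cross-validation and cross_val_score the outer k-fold one):

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
# Inner 5-fold cross-validation chooses k separately for each outer training set
inner = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5, 7, 9]}, cv=5)
# Outer 10-fold cross-validation estimates the quality of the whole learning process
scores = cross_val_score(inner, X, y, cv=10)
print(scores.mean())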
Comparing machine learning schemes
27
Comparing learning schemes II
• Want to show that scheme A is better than scheme B in
a particular domain
• For a given amount of training data (i.e., data size)
• On average, across all possible training sets from that domain
• Let's assume we have an infinite amount of data from
the domain
• Then, we can simply
• sample infinitely many datasets of a specified size
• obtain a cross-validation estimate on each dataset for each
scheme
• check if the mean accuracy for scheme A is better than the
mean accuracy for scheme B
28
Paired t-test
• In practice, we have limited data and a limited
number of estimates for computing the mean
• Student’s t-test tells us whether the means of two
samples are significantly different
• In our case the samples are cross-validation
estimates, one for each dataset we have sampled
• We can use a paired t-test because the individual
samples are paired
• The same cross-validation is applied twice, ensuring that
all the training and test sets are exactly the same
William Gosset
Born: 1876 in Canterbury; Died: 1937 in Beaconsfield, England
Obtained a post as a chemist in the Guinness brewery in Dublin in 1899.
Invented the t-test to handle small samples for quality control in
brewing. Wrote under the name "Student".
29
Distribution of the means
• x1, x2, …, xk and y1, y2, …, yk are the 2k samples for the k
different datasets
• mx and my are the means
• With enough samples, the mean of a set of independent
samples is normally distributed
(central limit theorem from statistics)
• Estimated variances of the means are
σx²/k and σy²/k
30
Student’s distribution
• With small sample sizes (k < 100) the mean follows
Student’s distribution with k–1 degrees of freedom
• Confidence limits from Student’s distribution, assuming we have 10 estimates
(9 degrees of freedom), compared with the normal distribution:

              Student’s distribution (9 DoF)    Normal distribution
Pr[X ≥ z]     z                                 z
0.1%          4.30                              3.09
0.5%          3.25                              2.58
31
Distribution of the differences
• Let md = mx – my
• The difference of the means (md) also has a Student’s
distribution with k–1 degrees of freedom
• Let σd² be the variance of the differences
• The standardized version of md is called the t-statistic:
t = md / √(σd²/k)
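• A sketch of the computation with hypothetical accuracy estimates for two schemes on k = 5 datasets (SciPy's ttest_rel, if available, returns the same statistic):

import math

x = [0.81, 0.79, 0.84, 0.80, 0.83]   # hypothetical estimates for scheme A
y = [0.78, 0.76, 0.83, 0.79, 0.80]   # hypothetical estimates for scheme B (same datasets)
d = [a - b for a, b in zip(x, y)]    # paired differences
k = len(d)
m_d = sum(d) / k
var_d = sum((v - m_d) ** 2 for v in d) / (k - 1)   # sample variance of the differences
t = m_d / math.sqrt(var_d / k)       # compare with Student's distribution, k-1 degrees of freedom
print(t)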
32
Performing the test
33
Unpaired observations
34
Dependent estimates
• Unfortunately, we have assumed that we have enough data
to create several datasets of the desired size
• Need to re-use data if that is not the case
• E.g., running cross-validations with different randomizations on
the same data
• Samples become dependent ⇒ insignificant differences can
become significant
• A heuristic test that tries to combat this is the corrected
resampled t-test
• Assume we use the repeated hold-out method with k runs, with n1
instances for training and n2 for testing
• The new test statistic is:
t = md / √((1/k + n2/n1)·σd²)
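• A sketch of the corrected statistic (the per-run differences, n1 and n2 below are hypothetical):

import math

def corrected_resampled_t(diffs, n1, n2):
    k = len(diffs)
    m_d = sum(diffs) / k
    var_d = sum((d - m_d) ** 2 for d in diffs) / (k - 1)
    # Variance is inflated by n2/n1 to account for the overlapping training sets
    return m_d / math.sqrt((1.0 / k + n2 / n1) * var_d)

diffs = [0.03, 0.01, 0.04, 0.02, 0.03, 0.01, 0.02, 0.05, 0.02, 0.03]
print(corrected_resampled_t(diffs, n1=900, n2=100))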
35
Predicting probabilities
36
Quadratic loss function
• Want to minimize the squared error of the probability estimates,
Σj (pj – aj)², where pj is the predicted probability of class j
and aj is 1 for the actual class and 0 otherwise
37
Informational loss function
• The informational loss function is –log2(pc),
where c is the index of the instance’s actual class
• Number of bits required to communicate the actual class
• Let p1* … pk* be the true class probabilities
• Then the expected value for the loss function is:
–p1* log2(p1) – p2* log2(p2) – … – pk* log2(pk)
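• A sketch of both loss functions for a single instance (the predicted probabilities and the actual class are made up):

import math

p = [0.6, 0.3, 0.1]        # predicted class probabilities
actual = 0                 # index of the instance's actual class
a = [1 if j == actual else 0 for j in range(len(p))]

quadratic_loss = sum((pj - aj) ** 2 for pj, aj in zip(p, a))
informational_loss = -math.log2(p[actual])   # bits needed to communicate the actual class
print(quadratic_loss, informational_loss)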
39
Counting the cost
40
Counting the cost
                            Predicted class
                            Yes                No
Actual class     Yes        True positive      False negative
                 No         False positive     True negative
41
Aside: the kappa statistic
• Two confusion matrices for a 3-class problem:
actual predictor (left) vs. random predictor (right)
43
Cost-sensitive classification
• Can take costs into account when making predictions
• Basic idea: only predict high-cost class when very confident
about prediction
• Given: predicted class probabilities
• Normally, we just predict the most likely class
• Here, we should make the prediction that minimizes the
expected cost
• Expected cost: dot product of vector of class probabilities and
appropriate column in cost matrix
• Choose column (class) that minimizes expected cost
• This is the minimum-expected cost approach to cost-sensitive
classification
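• A sketch of the minimum-expected cost approach for a two-class problem (the probabilities and cost matrix are made up; cost[i][j] is the cost of predicting class j when the actual class is i):

probs = [0.7, 0.3]          # predicted probabilities for classes yes, no
cost = [[0, 10],            # actual yes: a false negative costs 10
        [1,  0]]            # actual no:  a false positive costs 1

# Expected cost of each prediction = dot product of probs with the matching cost column
expected = [sum(probs[i] * cost[i][j] for i in range(len(probs)))
            for j in range(len(cost[0]))]
prediction = min(range(len(expected)), key=lambda j: expected[j])
print(expected, prediction)   # predict the class with minimum expected cost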
44
Cost-sensitive learning
• So far we haven't taken costs into account at training time
• Most learning schemes do not perform cost-sensitive
learning
• They generate the same classifier no matter what costs are
assigned to the different classes
• Example: standard decision tree learner
• Simple methods for cost-sensitive learning:
• Resampling of instances according to costs
• Weighting of instances according to costs
• Some schemes can take costs into account by varying a
parameter, e.g., naïve Bayes
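• A sketch of cost-sensitive learning by instance weighting (it assumes scikit-learn, whose learners typically accept sample_weight when fitting; the costs are made up):

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
class_cost = {0: 5.0, 1: 1.0}                 # errors on class 0 are five times as costly
weights = [class_cost[label] for label in y]  # weight each instance by the cost of its class
tree = DecisionTreeClassifier(random_state=0).fit(X, y, sample_weight=weights)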
45
Lift charts
46
Generating a lift chart
• Sort instances based on predicted probability of being positive:
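• A sketch of the bookkeeping behind a lift chart, with made-up (probability, actual class) pairs:

data = [(0.95, 1), (0.93, 1), (0.90, 0), (0.88, 1), (0.80, 0),
        (0.75, 1), (0.70, 0), (0.65, 1), (0.60, 0), (0.50, 1)]
data.sort(key=lambda pair: pair[0], reverse=True)   # sort by predicted probability of "yes"

points, positives = [], 0
for size, (_, actual) in enumerate(data, start=1):
    positives += actual
    points.append((size, positives))   # x: subset size, y: number of true positives included
print(points)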
47
A hypothetical lift chart
49
Interactive cost-benefit analysis: Example II
50
ROC curves
51
A sample ROC curve
53
ROC curves for two schemes
55
More measures...
• Percentage of retrieved documents that are relevant:
precision = TP/(TP+FP)
• Percentage of relevant documents that are returned:
recall = TP/(TP+FN)
• Precision/recall curves have hyperbolic shape
• Summary measures: average precision at 20%, 50% and
80% recall (three-point average recall)
• F-measure=(2 × recall × precision)/(recall+precision)
• sensitivity × specificity = (TP / (TP + FN)) × (TN / (FP + TN))
• Area under the ROC curve (AUC):
probability that randomly chosen positive instance is
ranked above randomly chosen negative one
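• A sketch computing these measures from made-up confusion-matrix counts:

TP, FP, FN, TN = 40, 10, 20, 30

precision = TP / (TP + FP)
recall = TP / (TP + FN)
f_measure = 2 * recall * precision / (recall + precision)
sensitivity_times_specificity = (TP / (TP + FN)) * (TN / (FP + TN))
print(precision, recall, f_measure, sensitivity_times_specificity)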
56
Summary of some measures
57
Cost curves
• Cost curves plot expected costs directly
• Example for case with uniform costs (i.e., error):
58
Cost curves: example with costs
59
Evaluating numeric prediction
60
Other measures
• The root mean-squared error: √(Σi (pi – ai)²/n),
where pi is the predicted and ai the actual value for test instance i
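• A sketch computing the root mean-squared error and the mean absolute error for made-up numeric predictions:

import math

predicted = [12.0, 15.5, 9.8, 20.1]    # p_i
actual = [11.0, 16.0, 10.5, 18.0]      # a_i
n = len(actual)

rmse = math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / n
print(rmse, mae)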
61
Improvement on the mean
62
Correlation coefficient
63
Which measure?
• Best to look at all of them
• Often it doesn’t matter
• Example:
                               A       B       C       D
Root mean-squared error        67.8    91.7    63.3    57.4
Mean absolute error            41.3    38.5    33.4    29.2
Root relative squared error    42.2%   57.2%   39.4%   35.8%
Relative absolute error        43.1%   40.1%   34.8%   30.4%
Correlation coefficient        0.88    0.88    0.89    0.91
• D best
• C second-best
• A, B arguable
64
The MDL principle
• MDL stands for minimum description length
• The description length is defined as:
space required to describe a theory
+
space required to describe the theory’s mistakes
• In our case the theory is the classifier and the mistakes
are the errors on the training data
• Aim: we seek a classifier with minimal DL
• MDL principle is a model selection criterion
• Enables us to choose a classifier of an appropriate
complexity to combat overfitting
65
Model selection criteria
• Model selection criteria attempt to find a good compromise
between:
• The complexity of a model
• Its prediction accuracy on the training data
• Reasoning: a good model is a simple model that achieves
high accuracy on the given data
• Also known as Occam’s Razor:
the best theory is the smallest one
that describes all the facts
William of Ockham, born in the village of Ockham in Surrey
(England) about 1285, was the most influential philosopher
of the 14th century and a controversial theologian.
66
Elegance vs. errors
67
MDL and compression
• MDL principle relates to data compression:
• The best theory is the one that compresses the data the most
• In supervised learning, to compress the labels of a dataset, we
generate a model and then store the model and its mistakes
• We need to compute
(a) the size of the model, and
(b) the space needed to encode the errors
• (b) easy: use the informational loss function
• (a) need a method to encode the model
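• A toy sketch of the two-part description length (the model size in bits and the per-instance probabilities are made up; the error part uses the informational loss function):

import math

model_bits = 120.0                                    # (a) bits needed to encode the model
probs_of_actual_class = [0.9, 0.8, 0.4, 0.95, 0.6]    # model's probability for each training label
error_bits = sum(-math.log2(p) for p in probs_of_actual_class)   # (b) bits to encode the mistakes
print(model_bits + error_bits)                        # total description length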
68
MDL and Bayes’s theorem
• Equivalent to maximizing the posterior probability of the theory T
given the evidence E, because Bayes’s theorem gives
–log P(T|E) = –log P(E|T) – log P(T) + log P(E)
where log P(E) is a constant that does not depend on the theory
69
MDL and MAP
70
Discussion of MDL principle
71
MDL and clustering
72
Using a validation set for model selection
• MDL principle is one example of a model selection criterion
• Model selection: finding the right model complexity
• Classic model selection problem in statistics:
• Finding the subset of attributes to use in a linear regression model
(yes, linear regression can overfit!)
• Other model selection problems: choosing the size of a
decision tree or artificial neural network
• Many model selection criteria exist, based on a variety of
theoretical assumptions
• Simple model selection approach: use a validation set
• Use the model complexity that yields best performance on the
validation set
• Alternative approach when data is scarce: internal cross-
validation
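• A sketch of validation-set model selection for decision-tree size (it assumes scikit-learn and uses maximum depth as the complexity knob; the dataset and depth range are arbitrary):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

best_depth = max(range(1, 11),
                 key=lambda d: DecisionTreeClassifier(max_depth=d, random_state=0)
                               .fit(X_train, y_train).score(X_val, y_val))
print(best_depth)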
73