Credibility: Evaluating What's Been Learned
Training and Testing
We measure the success of a classification procedure by its error rate (or, equivalently, its success rate).
Measuring the success rate on the training set is highly optimistic.
The error rate on the training set is called the resubstitution error.
We use a separate test set to estimate the true error rate.
The test set should be independent of the training set.
Sometimes a third set, the validation set, is used to tune the classification technique.
When we hold out part of the available data for testing (so it is not used for training), the process is called the holdout procedure.
Predicting performance
Expected success rate = 100 − error rate (when the error rate is also expressed as a percentage).
We want the true success rate.
Calculating the true success rate: suppose the observed success rate f = s/n, where s is the number of successes out of a total of n instances.
For large n, f follows a normal distribution.
We then predict the true success rate p for whatever confidence level we want.
For example, if f = 75%, then p will lie in [73.2%, 76.7%] with 80% confidence.
Predicting performance
From statistics we know that the mean of f is p and its variance is p(1 − p)/n.
To use the standard normal distribution we transform f so that its mean is 0 and its standard deviation is 1.
Suppose our confidence level is c% and we want to calculate p.
We use the two-tailed property of the normal distribution.
Since the total area under the normal curve is taken as 100%, the area we leave out in the two tails is 100 − c.
Predicting performance
Finally, after all the manipulations, the bounds on the true success rate are:
p = [ f + z^2/(2N) ± z · sqrt( f/N − f^2/N + z^2/(4N^2) ) ] / ( 1 + z^2/N )
Here:
p -> true success rate
f -> observed success rate
N -> number of instances
z -> factor read from a normal distribution table for the chosen confidence, using the 100 − c tail area
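Below is a minimal sketch (not from the original slides) of how these bounds could be computed; it assumes SciPy is available for the normal-distribution lookup, and the N = 1000 in the example call is an assumption chosen so the result matches the interval quoted above.

```python
from math import sqrt
from scipy.stats import norm

def success_rate_interval(s, n, confidence=0.80):
    """Confidence bounds on the true success rate p, given s successes in n instances."""
    f = s / n                                   # observed success rate
    z = norm.ppf(1 - (1 - confidence) / 2)      # two-tailed z value for the chosen confidence
    centre = f + z**2 / (2 * n)
    spread = z * sqrt(f / n - f**2 / n + z**2 / (4 * n**2))
    return (centre - spread) / (1 + z**2 / n), (centre + spread) / (1 + z**2 / n)

# f = 75% with an assumed N = 1000 instances, at 80% confidence
print(success_rate_interval(750, 1000))         # roughly (0.732, 0.767)
```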
Cross-validation
We use cross-validation when the amount of data is small and we still need independent training and test sets from it.
It is important that each class is represented in its true proportion in both the training and test sets: this is called stratification.
A common technique is stratified 10-fold cross-validation, where the instances are divided into 10 folds.
We run 10 iterations, each time using a different single fold for testing and the remaining 9 folds for training, and average the errors of the 10 iterations.
Problem: computationally intensive.
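As a rough illustration (not part of the slides), stratified 10-fold cross-validation could be written with scikit-learn as follows; the choice of GaussianNB as the learner is arbitrary, and X and y are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB

def stratified_cv_error(X, y, n_folds=10):
    """Average error rate over stratified k-fold cross-validation."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    errors = []
    for train_idx, test_idx in skf.split(X, y):
        model = GaussianNB().fit(X[train_idx], y[train_idx])                # train on 9 folds
        errors.append(np.mean(model.predict(X[test_idx]) != y[test_idx]))   # test on the held-out fold
    return float(np.mean(errors))
```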
Other estimates
Leave-one-out: steps
One instance is left out for testing and the rest are used for training.
This is repeated for every instance and the errors are averaged.
Leave-one-out: advantage
We train on the largest possible training sets.
Leave-one-out: disadvantages
Computationally intensive.
Cannot be stratified.
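Leave-one-out is simply n-fold cross-validation with n equal to the number of instances; a brief sketch (illustrative only, again with an arbitrary learner and NumPy-array inputs):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

def leave_one_out_error(X, y):
    """Error rate averaged over leaving each instance out once."""
    errors = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        model = GaussianNB().fit(X[train_idx], y[train_idx])
        errors.append(float(model.predict(X[test_idx])[0] != y[test_idx][0]))
    return sum(errors) / len(errors)
```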
Other estimates
0.632 bootstrap
A dataset of n instances is sampled n times, with replacement, to give another dataset of n instances.
There will be some repeated instances in the second set; the instances never picked serve as the test set.
The error is then estimated as:
e = 0.632 × (error on test instances) + 0.368 × (error on training instances)
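A sketch of one round of the 0.632 bootstrap (an illustration, not code from the slides); X and y are assumed to be NumPy arrays and the learner is arbitrary.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def bootstrap_632_error(X, y, seed=0):
    """One 0.632 bootstrap estimate: e = 0.632*err_test + 0.368*err_train."""
    rng = np.random.default_rng(seed)
    n = len(y)
    train_idx = rng.integers(0, n, size=n)             # sample n times with replacement
    test_idx = np.setdiff1d(np.arange(n), train_idx)   # instances never picked form the test set
    model = GaussianNB().fit(X[train_idx], y[train_idx])
    err_test = np.mean(model.predict(X[test_idx]) != y[test_idx])
    err_train = np.mean(model.predict(X[train_idx]) != y[train_idx])
    return 0.632 * err_test + 0.368 * err_train
```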
Comparing data mining methods
Until now we were dealing with performance prediction.
Now we look at methods to compare algorithms, to see which one did better.
We cannot directly use the error rate to decide which algorithm is better, as the error rates might have been calculated on different data sets.
So to compare algorithms we need statistical tests.
We use Student's t-test for this; it helps us decide whether the mean errors of two algorithms differ for a given confidence level.
Comparing data mining methods
We use the paired t-test, a slight modification of Student's t-test.
Paired t-test (assuming effectively unlimited data):
Draw k data sets from the available data.
Use cross-validation with each technique to get the respective outcomes x1, x2, x3, ..., xk and y1, y2, y3, ..., yk.
Let mx be the mean of the x values and my the mean of the y values, and di = xi − yi.
The t-statistic is:
t = m_d / sqrt( σ_d^2 / k )
where m_d is the mean of the differences di and σ_d^2 is their variance.
Comparing data mining methods
The value of k gives the degrees of freedom, which lets us look up a threshold z for a particular confidence value.
If t <= −z or t >= z, the two means differ significantly.
The hypothesis that the two means are the same is the null hypothesis; if |t| falls below the threshold, it cannot be rejected.
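A small sketch of the paired t-test described above; the error rates in the example call are made up.

```python
import numpy as np
from scipy import stats

def paired_t_test(x, y, confidence=0.95):
    """Decide whether the mean errors of two schemes differ significantly."""
    d = np.asarray(x) - np.asarray(y)                    # per-dataset differences d_i = x_i - y_i
    k = len(d)
    t = d.mean() / np.sqrt(d.var(ddof=1) / k)            # t-statistic from the slide
    z = stats.t.ppf(1 - (1 - confidence) / 2, df=k - 1)  # two-tailed threshold for k-1 degrees of freedom
    return t, abs(t) >= z                                # True => the means differ significantly

# Hypothetical cross-validated error rates of two schemes on k = 5 data sets
print(paired_t_test([0.20, 0.22, 0.19, 0.25, 0.21], [0.18, 0.21, 0.17, 0.23, 0.20]))
```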
Predicting Probabilities
Until now we considered schemes whose predictions are simply correct or incorrect; this is evaluated with the 0–1 loss function.
Now we deal with measuring success for algorithms that output a probability distribution, e.g. Naïve Bayes.
Predicting Probabilities
Quadratic loss function:
For a single instance there are k outcomes (classes).
Predicted probability vector: p1, p2, ..., pk.
The actual outcome vector is a1, a2, a3, ..., ak, where the component for the actual outcome is 1 and all others are 0.
We want to minimize the quadratic loss function, given by:
E = Σ_j (p_j − a_j)^2
The minimum is achieved when the probability vector is the true probability vector.
Predicting Probabilities
Informational loss function:
Given by −log2(pi), where i is the actual class.
The minimum is again reached at the true probabilities.
Differences between quadratic loss and informational loss:
Quadratic loss takes all class probabilities into account, while informational loss depends only on the probability assigned to the actual class.
Quadratic loss is bounded, with a maximum value of 2, while informational loss is unbounded and can reach infinity.
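Both loss functions are simple to compute; a minimal sketch with illustrative values:

```python
import numpy as np

def quadratic_loss(p, actual):
    """Sum over classes of (p_j - a_j)^2, where a is 1 for the actual class and 0 elsewhere."""
    a = np.zeros_like(p)
    a[actual] = 1.0
    return float(np.sum((p - a) ** 2))

def informational_loss(p, actual):
    """-log2 of the probability assigned to the actual class."""
    return float(-np.log2(p[actual]))

p = np.array([0.7, 0.2, 0.1])                 # predicted distribution over three classes
print(quadratic_loss(p, 0), informational_loss(p, 0))
```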
Counting the cost
Different outcomes may have different costs.
For example, in a loan decision the cost of lending to a defaulter is far greater than the lost-business cost of refusing a loan to a non-defaulter.
Suppose we have a two-class prediction. The possible outcomes are true positives, false positives, true negatives and false negatives.
Counting the cost
True positive rate: TP / (TP + FN)
False positive rate: FP / (FP + TN)
Overall success rate: number of correct classifications / total number of classifications
Error rate = 1 − success rate
In the multiclass case we have a confusion matrix (an actual one and one expected from a random predictor):
Counting the cost
These are the actual and the random-predictor outcomes of a three-class problem.
The diagonal represents the correctly classified cases.
Kappa statistic = (D-observed − D-random) / (D-perfect − D-random), where D counts the instances on the diagonal.
Here the kappa statistic = (140 − 82) / (200 − 82) ≈ 49.2%.
Kappa measures the agreement between predicted and observed categorizations of a dataset, correcting for agreement that occurs by chance.
It does not take cost into account.
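A sketch of the kappa calculation; the matrix below is hypothetical but chosen to reproduce the numbers quoted above (140 on the diagonal, 82 expected by chance, 200 instances in total).

```python
import numpy as np

def kappa_statistic(confusion):
    """Kappa from a confusion matrix (rows = actual class, columns = predicted class)."""
    c = np.asarray(confusion, dtype=float)
    total = c.sum()
    d_observed = np.trace(c)                                   # successes on the diagonal
    d_random = np.sum(c.sum(axis=1) * c.sum(axis=0)) / total   # successes expected by chance
    return (d_observed - d_random) / (total - d_random)

print(kappa_statistic([[88, 10, 2],
                       [14, 40, 6],
                       [18, 10, 12]]))                         # about 0.492
```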
Classification with costs
Example cost matrices (the default matrix simply counts the number of errors).
Performance is now measured by the average cost per prediction rather than the success rate.
We try to minimize the costs.
Expected cost: the dot product of the vector of class probabilities and the appropriate column of the cost matrix.
Classification with costs
Steps to take cost into consideration at prediction time:
First use a learning method that outputs a probability vector (such as Naïve Bayes).
Then multiply the probability vector by each column of the cost matrix in turn to get the expected cost for each class/column.
Select the class with the minimum expected cost (or the maximum, if the matrix encodes benefits rather than costs).
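A minimal sketch of this procedure; the probabilities and the loan-style cost matrix are made-up values.

```python
import numpy as np

def min_expected_cost_class(prob, cost_matrix):
    """Pick the class with the lowest expected cost.
    cost_matrix[i][j] = cost of predicting class j when the true class is i."""
    expected = np.asarray(prob) @ np.asarray(cost_matrix, dtype=float)  # dot product with each column
    return int(np.argmin(expected)), expected

prob = np.array([0.7, 0.3])            # P(good payer), P(defaulter) from some probabilistic learner
cost = [[0, 1],                        # actual good payer: lend = 0, refuse = 1 (lost business)
        [10, 0]]                       # actual defaulter: lend = 10, refuse = 0
print(min_expected_cost_class(prob, cost))   # expected costs [3.0, 0.7] -> refuse the loan
```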
Cost-sensitive learning
Until now we only included the cost factor during evaluation.
We can also incorporate costs into the learning phase of a method.
One way is to change the proportions of instances in the training set so as to reflect the costs.
For example, we can replicate the instances of a particular class so that the learned model makes fewer errors on that class.
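A rough sketch of the replication idea (the replication factor of 5 is arbitrary, and NumPy-array inputs are assumed):

```python
import numpy as np

def replicate_costly_class(X, y, costly_class, factor=5):
    """Oversample the class whose misclassifications are expensive."""
    idx = np.where(y == costly_class)[0]
    extra = np.repeat(idx, factor - 1)            # add (factor - 1) extra copies of each such instance
    return np.concatenate([X, X[extra]]), np.concatenate([y, y[extra]])
```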
Lift Charts
In practice, costs are rarely known precisely.
In marketing terminology the response rate is referred to as the lift factor.
We compare probable scenarios to make decisions; a lift chart allows visual comparison.
Example: a promotional mail-out to 1,000,000 households.
Mail to all: 0.1% respond (1000).
A data mining tool identifies a subset of 100,000 households of which 0.4% respond (400).
That is a lift of 4.
Lift Charts
Steps to calculate the lift factor:
Decide on a sample size.
Arrange the data in decreasing order of the predicted probability of the class on which the lift factor is based (the positive class).
Calculate:
Sample success proportion = number of positive instances in the sample / sample size
Lift factor = sample success proportion / data success proportion
Calculating the lift factor for different sample sizes gives the lift chart.
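The steps above could be sketched as follows (illustrative only); each returned point pairs a sample size with its lift factor.

```python
import numpy as np

def lift_curve(scores, labels, fractions=(0.1, 0.2, 0.4, 0.6, 0.8, 1.0)):
    """Lift factor at several sample sizes, ranking instances by predicted probability."""
    order = np.argsort(scores)[::-1]          # decreasing predicted probability of the positive class
    labels = np.asarray(labels)[order]
    data_rate = labels.mean()                 # success proportion over the whole data set
    points = []
    for frac in fractions:
        k = max(1, int(frac * len(labels)))
        points.append((k, labels[:k].mean() / data_rate))  # sample proportion / data proportion
    return points
```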
Lift Charts
A hypothetical lift chart (figure not reproduced here).
Lift Charts
On the lift chart we would like to stay toward the upper left corner.
The diagonal line is the curve for random samples drawn without using the sorted data.
Any good selection keeps the lift curve above the diagonal.
ROC Curves
ROC stands for receiver operating characteristic.
Differences from lift charts:
The y-axis shows the percentage of true positives.
The x-axis shows the percentage of false positives in the samples.
A ROC curve from a single test set is jagged; it can be smoothed out by cross-validation.
ROC Curves
A ROC curve (figure not reproduced here).
ROC Curves
Ways to generate ROC curves from cross-validation (consider the previous diagram for reference).
First way:
Get the probability estimates from the different folds of data.
Sort the data in decreasing order of the probability of the "yes" class.
Select a point on the x-axis and, for that number of "no" instances, get the number of "yes" instances from each fold's estimates.
Average the number of "yes" instances over all the folds and plot it.
ROC Curves
Second way:
Get the probability estimates from the different folds of data.
Sort each fold's data in decreasing order of the probability of the "yes" class.
For each fold, sweep down the sorted list, recording the number of "yes" instances obtained for each number of "no" instances.
Plot a ROC curve for each fold individually.
Average all the ROC curves.
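For reference, the ROC points for one fold (or one test set) can be obtained by sweeping a threshold down the ranked list; a minimal sketch, not taken from the slides:

```python
import numpy as np

def roc_points(scores, labels):
    """Percentages of true and false positives as the decision threshold is lowered."""
    order = np.argsort(scores)[::-1]          # decreasing probability of the "yes" class
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels == 1)               # "yes" instances accepted so far
    fp = np.cumsum(labels == 0)               # "no" instances accepted so far
    tpr = 100.0 * tp / max(tp[-1], 1)         # y-axis: percentage of true positives
    fpr = 100.0 * fp / max(fp[-1], 1)         # x-axis: percentage of false positives
    return fpr, tpr
```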
ROC Curves
ROC curves for two schemes (figure not reproduced here).
ROC Curves
In the previous ROC curves:
For a small, focused sample, use method A.
For a large one, use method B.
In between, choose between A and B with appropriate probabilities.
Recall – precision curves
For a search query:
Recall = number of documents retrieved that are relevant / total number of documents that are relevant
Precision = number of documents retrieved that are relevant / total number of documents that are retrieved
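A short sketch with made-up document ids:

```python
def recall_precision(retrieved, relevant):
    """Recall and precision for one query, given collections of document ids."""
    hits = len(set(retrieved) & set(relevant))
    return hits / len(relevant), hits / len(retrieved)

print(recall_precision(retrieved=[1, 2, 3, 4], relevant=[2, 4, 5]))  # (2/3, 2/4)
```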
A summary
Different measures are used to evaluate the false positive versus false negative tradeoff (comparison table not reproduced here).
Cost curves
Cost curves plot expected costs directly.
Example for the case with uniform costs (i.e. plotting the error): figure not reproduced here.
Cost curves
Example with costs: figure not reproduced here.
Cost curves
C[+|−] is the cost of predicting + when the instance is −.
C[−|+] is the cost of predicting − when the instance is +.
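The slides do not spell out the cost-curve formula; under the standard formulation, the normalized expected cost of a classifier is fn·pc(+) + fp·(1 − pc(+)), where pc(+) = p(+)·C[−|+] / (p(+)·C[−|+] + p(−)·C[+|−]). A tentative sketch based on that assumption:

```python
def probability_cost(p_plus, cost_fn, cost_fp):
    """pc(+): probability-cost value on the x-axis; cost_fn = C[-|+], cost_fp = C[+|-]."""
    return p_plus * cost_fn / (p_plus * cost_fn + (1 - p_plus) * cost_fp)

def normalized_expected_cost(fp_rate, fn_rate, pc_plus):
    """Expected cost of a classifier at a given probability-cost value."""
    return fn_rate * pc_plus + fp_rate * (1 - pc_plus)

# Trace one classifier (false positive rate 0.2, false negative rate 0.1) across the pc(+) range
curve = [(x / 10, normalized_expected_cost(0.2, 0.1, x / 10)) for x in range(11)]
```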
Minimum Description Length Principle
The description length is defined as:
space required to describe a theory + space required to describe the theory's mistakes
Here theory = the classifier and mistakes = the errors on the training data.
We try to minimize the description length.
The MDL theory is the one that compresses the data the most: to compress a data set we generate a model and then store the model together with its mistakes.
We therefore need to compute:
the size of the model, and
the space needed to encode the errors.
Minimum Description Length Principle
The second quantity is easy: just use the informational loss function.
For the first we need a method to encode the model.
L[T] = "length" of the theory
L[E|T] = training set encoded with respect to the theory
Minimum Description Length Principle
MDL and clustering
Description length of the theory: the bits needed to encode the clusters, e.g. the cluster centers.
Description length of the data given the theory: encode cluster membership and position relative to the cluster, e.g. distance to the cluster centers.
This works if the coding scheme uses less code space for small numbers than for large ones.
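A purely illustrative sketch of comparing two models by total description length, assuming the model sizes in bits are given and the errors are encoded with the informational loss function; all numbers are hypothetical.

```python
import math

def description_length(model_bits, class_probs, actual_classes):
    """L[T] + L[E|T]: model size plus the bits needed to encode the training labels."""
    data_bits = sum(-math.log2(p[c]) for p, c in zip(class_probs, actual_classes))
    return model_bits + data_bits

# A small model with a looser fit versus a large model with a tighter fit
small = description_length(50,  [[0.60, 0.40]] * 100, [0] * 100)
large = description_length(400, [[0.95, 0.05]] * 100, [0] * 100)
print(small, large)    # MDL prefers whichever total is smaller
```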
Visit more self-help tutorials
Pick a tutorial of your choice and browse through it at your own pace.
The tutorials section is free, self-guiding and will not involve any additional support.
Visit us at www.dataminingtools.net
