COEN413 Machine Learning-2

Data Mining

What is data mining?

• After years of data mining, there is still no unique answer to this question.
• A tentative definition: Data mining is the use of efficient techniques for the analysis of very large collections of data and the extraction of useful and possibly unexpected patterns in data.
Why do we need data mining?
• Really, really huge amounts of raw data!!
– In the digital age, terabytes of data are generated every second
• Mobile devices, digital photographs, web documents
• Facebook updates, Tweets, blogs, user-generated content
• Transactions, sensor data, surveillance data
• Queries, clicks, browsing
– Cheap storage has made it possible to retain all this data
• We need to analyze the raw data to extract knowledge
Why do we need data mining?
• “The data is the computer”
– Large amounts of data can be more powerful than complex algorithms and models
• Google has solved many Natural Language Processing problems, simply by looking at the
data
• Example: misspellings, synonyms
– Data is power!
• Today, the collected data is one of the biggest assets of an online company
– Query logs of Google
– The friendship and updates of Facebook
– Tweets and follows of Twitter
– Amazon transactions
– We need a way to harness the collective intelligence
So, what is Data?
• Collection of data objects and their attributes
• An attribute is a property or characteristic of an object
– Examples: eye color of a person, temperature, etc.
– Attribute is also known as variable, field, characteristic, or feature
• A collection of attributes describes an object
– Object is also known as record, point, case, sample, entity, or instance

Example (rows are objects, columns are attributes):

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Size: Number of objects
Dimensionality: Number of attributes
Sparsity: Number of populated object-attribute pairs
Types of Attributes
• There are different types of attributes
– Categorical
• Examples: eye color, zip codes, words, rankings (e.g., good, fair, bad), height in {tall, medium, short}
• Nominal (no order or comparison) vs Ordinal (ordered, but differences between values are not meaningful)
– Numeric
• Examples: dates, temperature, time, length, value, count
• Discrete (e.g., counts) vs Continuous (e.g., temperature)
• Special case: Binary attributes (yes/no, exists/not exists)
Types of data
• Numeric data: Each object is a point in a multidimensional space
• Categorical data: Each object is a vector of categorical values
• Set data: Each object is a set of values (with or without counts)
– Sets can also be represented as binary vectors, or vectors of counts
• Ordered sequences: Each object is an ordered sequence of values.
• Graph data: Objects and their links form a graph, e.g., the Web graph of pages and hyperlinks
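To make these representations concrete, a minimal sketch in Python; all values below are made-up examples, not course data:

```python
# Numeric data: a point in multidimensional space
point = (36.6, 175.0, 70.2)  # e.g., temperature, height, weight

# Categorical data: a vector of categorical values
record = {"eye_color": "brown", "marital_status": "Single"}

# Set data: a set of items, or a binary vector over a fixed vocabulary
basket = {"Bread", "Coke", "Milk"}
vocabulary = ["Beer", "Bread", "Coke", "Diaper", "Milk"]
binary_vector = [1 if item in basket else 0 for item in vocabulary]  # [0, 1, 1, 0, 1]

# Ordered sequence: order matters
clickstream = ["home", "search", "product", "checkout"]

# Graph data: adjacency list, e.g., pages and hyperlinks
web_graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
```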
What can you do with the data?
• Suppose you are a search engine and you have a toolbar log consisting of
– pages browsed,
– queries,
– pages clicked,
– ads clicked,
each with a user id and a timestamp. What information would you like to get out of the data?
– Example tasks: ad click prediction, query reformulations
Why data mining?
• Commercial point of view
– Data has become the key competitive advantage of companies
• Examples: Facebook, Google, Amazon
– Being able to extract useful information out of the data is key to exploiting it commercially.
• Scientific point of view
– Scientists are at an unprecedented position where they can collect TB of information
• Examples: Sensor data, astronomy data, social network data, gene data
– We need the tools to analyze such data to get a better understanding of the world and advance
science
• Scale (in data size and feature dimension)
– Why not use traditional analytic methods?
– Enormity of data, curse of dimensionality
– The amount and complexity of the data do not allow for manual processing. We need automated techniques.
What is Data Mining again?
• “Data mining is the analysis of (often large) observational data sets to
find unsuspected relationships and to summarize the data in novel ways
that are both understandable and useful to the data analyst” (Hand,
Mannila, Smyth)

• “Data mining is the discovery of models for data” (Rajaraman, Ullman)


– We can have the following types of models
• Models that explain the data (e.g., a single function)
• Models that predict the future data instances.
• Models that summarize the data
• Models that extract the most prominent features of the data.
What can we do with data mining?
• Some examples:
– Frequent itemsets and Association Rules extraction
– Coverage
– Clustering
– Classification
– Ranking
– Exploratory analysis
Frequent Itemsets and Association Rules
• Given a set of records, each of which contains some number of items from a given collection:
– Identify sets of items (itemsets) occurring frequently together
– Produce dependency rules which predict the occurrence of an item based on occurrences of other items

TID  Items
1    Bread, Coke, Milk
2    Beer, Bread
3    Beer, Coke, Diaper, Milk
4    Beer, Bread, Diaper, Milk
5    Coke, Diaper, Milk

Itemsets Discovered: {Milk, Coke}, {Diaper, Milk}
Rules Discovered: {Milk} --> {Coke}, {Diaper, Milk} --> {Beer}
Frequent Itemsets: Applications
• Text mining: finding associated phrases in text
– There are lots of documents that contain the phrases “association
rules”, “data mining” and “efficient algorithm”

• Recommendations:
– Users who buy this item often buy these items as well
– Users who watched James Bond movies, also watched Jason Bourne
movies.

– Recommendations make use of item and user similarity


Association Rule Discovery: Application

• Supermarket shelf management.


– Goal: To identify items that are bought together by
sufficiently many customers.
– Approach: Process the point-of-sale data collected with
barcode scanners to find dependencies among items.
– A classic rule:
• If a customer buys diapers and milk, then he is very likely to buy beer.
• So, don't be surprised if you find six-packs stacked next to diapers!
Clustering Definition
• Given a set of data points, each having a set of
attributes, and a similarity measure among them, find
clusters such that
– Data points in one cluster are more similar to one another.
– Data points in separate clusters are less similar to one
another.
• Similarity Measures?
– Euclidean Distance if attributes are continuous.
– Other Problem-specific Measures.
Illustrating Clustering
• Euclidean Distance Based Clustering in 3-D space:
– Intracluster distances are minimized
– Intercluster distances are maximized
Clustering: Application 1
• Bioinformatics applications:
– Goal: Group genes and tissues together such that genes are co-expressed in the same tissues
Clustering: Application 2
• Document Clustering:
– Goal: To find groups of documents that are similar to each
other based on the important terms appearing in them.
– Approach: To identify frequently occurring terms in each
document. Form a similarity measure based on the
frequencies of different terms. Use it to cluster.
– Gain: Information Retrieval can utilize the clusters to relate
a new document or search term to clustered documents.
Classification: Definition
• Given a collection of records (training set)
– Each record contains a set of attributes, one of the
attributes is the class.
• Find a model for class attribute as a function of the
values of other attributes.

• Goal: previously unseen records should be assigned


a class as accurately as possible.
– A test set is used to determine the accuracy of the model.
Usually, the given data set is divided into training and test
sets, with training set used to build the model and test set
used to validate it.
Classification Example
Attribute types: Refund and Marital Status are categorical, Taxable Income is continuous, and Cheat is the class attribute.

Training Set:
Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

Test Set:
Refund  Marital Status  Taxable Income  Cheat
No      Single          75K             ?
Yes     Married         50K             ?
No      Married         150K            ?
Yes     Divorced        90K             ?
No      Single          40K             ?
No      Married         80K             ?

Learn a classifier model from the Training Set, then apply it to predict the class of each Test Set record.
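One way to realize this example in code: a sketch using scikit-learn's decision tree (the library and model choice are assumptions, not part of the lecture), with one-hot encoding for the categorical attributes:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Training set from the table above.
train = pd.DataFrame({
    "Refund":  ["Yes", "No", "No", "Yes", "No", "No", "Yes", "No", "No", "No"],
    "Marital": ["Single", "Married", "Single", "Married", "Divorced",
                "Married", "Divorced", "Single", "Married", "Single"],
    "Income":  [125, 100, 70, 120, 95, 60, 220, 85, 75, 90],
    "Cheat":   ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"],
})
test = pd.DataFrame({
    "Refund":  ["No", "Yes", "No", "Yes", "No", "No"],
    "Marital": ["Single", "Married", "Married", "Divorced", "Single", "Married"],
    "Income":  [75, 50, 150, 90, 40, 80],
})

# One-hot encode categorical attributes; align test columns with training.
X_train = pd.get_dummies(train[["Refund", "Marital", "Income"]])
X_test = pd.get_dummies(test).reindex(columns=X_train.columns, fill_value=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, train["Cheat"])
print(model.predict(X_test))  # predicted class for each test record
```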
Classification: Application 1
• Ad Click Prediction

– Goal: Predict if a user that visits a web page will


click on a displayed ad. Use it to target users
with high click probability.
– Approach:
• Collect data for users over a period of time and
record who clicks and who does not. The {click, no
click} information forms the class attribute.
• Use the history of the user (web pages browsed,
queries issued) as the features.
• Learn a classifier model and test on new users.
Classification: Application 2
• Fraud Detection
– Goal: Predict fraudulent cases in credit card
transactions.
– Approach:
• Use credit card transactions and the information on the account holder as attributes.
– When does a customer buy, what does he buy, how often does he pay on time, etc.
• Label past transactions as fraud or fair transactions. This forms
the class attribute.
• Learn a model for the class of the transactions.
• Use this model to detect fraud by observing credit card
transactions on an account.
Link Analysis Ranking
• Given a collection of web pages that are linked to each
other, rank the pages according to importance
(authoritativeness) in the graph
– Intuition: A page gains authority if it is linked to by another
page.

• Application: When retrieving pages, the


authoritativeness is factored in the ranking.
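PageRank is one concrete instance of link-analysis ranking. Below is a minimal power-iteration sketch on a hypothetical three-page graph; the damping factor 0.85 is the conventional choice, assumed here:

```python
# Minimal PageRank sketch on a hypothetical 3-page web graph.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
pages = list(links)
damping = 0.85
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # power iteration until ranks stabilize
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for src, outs in links.items():
        for dst in outs:
            # Each page passes its authority along its outgoing links.
            new_rank[dst] += damping * rank[src] / len(outs)
    rank = new_rank

print(rank)  # C ends up highest: it is linked to by both A and B
```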
Exploratory Analysis
• Trying to understand the data as a physical phenomenon, and describe
them with simple metrics
– What does the web graph look like?
– How often do people repeat the same query?
– Are friends on Facebook also friends on Twitter?

• The important thing is to find the right metrics and ask the right questions

• It helps our understanding of the world, and can lead to models of the
phenomena we observe.
Connections of Data Mining with other areas
• Draws ideas from machine learning/AI, pattern recognition, statistics, and database systems
• Traditional techniques from these fields may be unsuitable due to
– Enormity of data
– High dimensionality of data
– Heterogeneous, distributed nature of data

[Figure: Data Mining at the intersection of Statistics/AI, Machine Learning/Pattern Recognition, and Database systems.]
Cultures
• Databases: concentrate on large-scale (non-
main-memory) data.
• AI (machine-learning): concentrate on complex
methods, small data.
– In today’s world data is more important than
algorithms
• Statistics: concentrate on models.
Models vs. Analytic Processing
• To a database person, data-mining is
an extreme form of analytic processing
– queries that examine large amounts
of data.
– Result is the query answer.
• To a statistician, data-mining is the
inference of models.
– Result is the parameters of the model.
Data Mining: Confluence of Multiple Disciplines

[Figure: Data Mining at the confluence of Database Technology, Statistics, Machine Learning, Visualization, Pattern Recognition, Algorithms, and Distributed Computing.]
Commodity Clusters
• Web data sets can be very large
– Tens to hundreds of terabytes
– Cannot mine on a single server
• Standard architecture emerging:
– Cluster of commodity Linux nodes, Gigabit ethernet interconnect
– Google GFS; Hadoop HDFS; Kosmix KFS
• Typical usage pattern
– Huge files (100s of GB to TB)
– Data is rarely updated in place
– Reads and appends are common
• How to organize computations on this architecture?
– Map-Reduce paradigm
Map-Reduce paradigm
• Map the data into key-value pairs
– E.g., map a document to word-count pairs
• Group by key
– Group all pairs of the same word, with lists of
counts
• Reduce by aggregating
– E.g., sum all the counts to produce the total count.
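A toy word-count version of the three phases in plain Python (a single-machine sketch; real systems such as Hadoop distribute each phase across nodes):

```python
from collections import defaultdict

documents = ["the quick brown fox", "the lazy dog", "the fox"]

# Map: emit (word, 1) key-value pairs from each document.
pairs = [(word, 1) for doc in documents for word in doc.split()]

# Group by key: collect the list of counts for each word.
groups = defaultdict(list)
for word, count in pairs:
    groups[word].append(count)

# Reduce: aggregate each group, here by summing the counts.
totals = {word: sum(counts) for word, counts in groups.items()}
print(totals)  # {'the': 3, 'quick': 1, 'fox': 2, ...}
```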
The data analysis pipeline
• Mining is not the only step in the analysis process

Data --> Preprocessing --> Data Mining --> Post-processing --> Result

• Preprocessing: real data is noisy, incomplete, and inconsistent; data cleaning is required to make sense of it
– Techniques: Sampling, Dimensionality Reduction, Feature Selection
– Dirty work, but often the most important step of the analysis
• Post-Processing: Make the data actionable and useful to the user
– Statistical analysis of importance
– Visualization.
– Pre- and Post-processing are often data mining tasks as well
Data Quality
• Examples of data quality problems:
– Noise and outliers
– Missing values
– Duplicate data
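A small sketch of how these problems might be handled with pandas (the library choice, thresholds, and data are all assumptions for illustration):

```python
import numpy as np
import pandas as pd

# Made-up data with a missing value, a duplicate row, and an outlier.
df = pd.DataFrame({
    "age":    [25, 25, 31, np.nan, 29, 410],  # 410 is a data-entry outlier
    "income": [50, 50, 62, 58, 55, 61],
})

df = df.drop_duplicates()                         # duplicate data
df["age"] = df["age"].fillna(df["age"].median())  # missing values
df = df[df["age"].between(0, 120)]                # crude noise/outlier filter
print(df)
```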
Sampling
• Sampling is the main technique employed for data
selection.
– It is often used for both the preliminary investigation of
the data and the final data analysis.

• Statisticians sample because obtaining the entire


set of data of interest is too expensive or time
consuming.

• Sampling is used in data mining because


processing the entire set of data of interest is too
expensive or time consuming.
Sampling …
• The key principle for effective sampling is the
following:
– Using a sample will work almost as well as using the entire data set, if the sample is representative

– A sample is representative if it has approximately the


same property (of interest) as the original set of data
Types of Sampling
• Simple Random Sampling
– There is an equal probability of selecting any particular item

• Sampling without replacement


– As each item is selected, it is removed from the population

• Sampling with replacement


– Objects are not removed from the population as they are selected for the sample.
• In sampling with replacement, the same object can be picked up more than once

• Stratified sampling
– Split the data into several partitions; then draw random samples from each partition
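A sketch of these sampling schemes using Python's standard library; the population and sample sizes are made up:

```python
import random
from collections import defaultdict

random.seed(0)
population = [("red", i) for i in range(90)] + [("blue", i) for i in range(10)]

# Simple random sampling without replacement: selected items are not repeated.
without = random.sample(population, 10)

# Sampling with replacement: the same object can be picked more than once.
with_repl = random.choices(population, k=10)

# Stratified sampling: partition by color, then draw from each partition,
# so the rare "blue" stratum is guaranteed representation.
strata = defaultdict(list)
for color, i in population:
    strata[color].append((color, i))
stratified = [x for group in strata.values() for x in random.sample(group, 5)]

print(len(without), len(with_repl), len(stratified))
```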
Sample Size

[Figure: the same point set sampled at 8000, 2000, and 500 points.]

