Unit III: Concept Description: Characterization and Comparison
Concept description:
– can handle complex data types of the attributes and
their aggregations
– a more automated process
OLAP:
– restricted to a small number of dimension and
measure types
– user-controlled process
Data Generalization and Summarization-
based Characterization
Data generalization
– A process which abstracts a large set of task-relevant
data in a database from low conceptual levels to
higher ones.
(Figure: ladder of conceptual levels 1–5.)
– Approaches:
Data cube approach (OLAP approach)
Attribute-oriented induction (AOI) approach
Implementation by Cube
Technology
Construct a data cube on-the-fly for the given data
mining query
– Facilitate efficient drill-down analysis
– May increase the response time
– A balanced solution: precomputation of “subprime” relation
Use a predefined & precomputed data cube
– Construct a data cube beforehand
– Facilitate not only Attribute-Oriented Induction (AOI), but
also Attribute Relevance Analysis (ARA), dicing, slicing, roll-
up and drill-down
– Cost of cube computation and the nontrivial storage overhead
Characterization vs. OLAP
Similarity:
– Presentation of data summarization at multiple levels of
abstraction.
– Interactive drilling, pivoting, slicing and dicing.
Differences:
– Automated desired level allocation.
– Dimension relevance analysis and ranking when there
are many relevant dimensions.
– Sophisticated typing on dimensions and measures.
– Analytical characterization: data dispersion analysis.
Characterization: Data Cube Approach
(without using Attribute-Oriented
Induction)
Perform computations and store results in data cubes
Strength
– An efficient implementation of data generalization
– Computation of various kinds of measures
e.g., count( ), sum( ), average( ), max( )
– Generalization and specialization can be performed on a data
cube by roll-up and drill-down
Limitations
– handle only dimensions of simple nonnumeric data and
measures of simple aggregated numeric values.
– Lack of intelligent analysis; cannot tell which dimensions should
be used or what levels the generalization should reach
Attribute-Oriented
Induction
Proposed in 1989 (KDD ‘89 workshop)
Not confined to categorical data or particular
measures.
How is it done?
1. Collect the task-relevant data (initial relation) using a
relational database query
2. Perform generalization by attribute removal or
attribute generalization.
3. Apply aggregation by merging identical, generalized
tuples and accumulating their respective counts.
4. Interactive presentation with users.
Basic Principles of Attribute-
Oriented Induction
Data focusing: task-relevant data, including dimensions,
and the result is the initial relation.
Attribute-removal: remove attribute A if there is a large set
of distinct values for A but (1) there is no generalization
operator on A, or (2) A’s higher level concepts are
expressed in terms of other attributes.
Attribute-generalization: If there is a large set of distinct
values for A, and there exists a set of generalization
operators on A, then select an operator and generalize A.
Attribute-threshold control: typically 2-8 distinct values, user-specified or default.
Generalized relation threshold control: control the final
relation/rule size.
Basic Algorithm for Attribute-
Oriented Induction
InitialRel: Query processing of task-relevant data, deriving
the initial relation.
PreGen: Based on the analysis of the number of distinct
values in each attribute, determine generalization plan for
each attribute: removal? or how high to generalize?
PrimeGen: Based on the PreGen plan, perform
generalization to the right level to derive a “prime
generalized relation”, accumulating the counts.
Presentation: User interaction: (1) adjust levels by drilling,
(2) pivoting, (3) mapping into rules, cross tabs,
visualization presentations.
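The steps above lend themselves to a compact implementation. Below is a minimal Python sketch, assuming a tiny in-memory relation and hand-written concept-hierarchy mappings (all names and data are illustrative, not from the original example):

```python
from collections import Counter

# Illustrative initial relation: (major, birth_place, gpa) tuples
initial_relation = [
    ("Physics",   "Vancouver", 3.9),
    ("Chemistry", "Toronto",   3.6),
    ("Computing", "Bombay",    3.8),
    ("Biology",   "Shanghai",  3.5),
]

# Hand-written generalization operators (stand-ins for concept hierarchies)
def gen_major(m):
    return "Science" if m in {"Physics", "Chemistry", "Biology"} else "Engineering"

def gen_birth_place(city):
    return "Canada" if city in {"Vancouver", "Toronto"} else "Foreign"

def gen_gpa(g):
    return "Excellent" if g >= 3.75 else "Very_good"

# PreGen/PrimeGen: generalize each attribute, then merge identical
# generalized tuples and accumulate their counts
prime_relation = Counter(
    (gen_major(major), gen_birth_place(bp), gen_gpa(gpa))
    for major, bp, gpa in initial_relation
)

# Presentation: print the prime generalized relation with counts
for tup, count in prime_relation.items():
    print(tup, "count =", count)
```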
Class Characterization: An Example
Initial relation attributes: Name, Gender, Major, Birth-Place, Birth_date, Residence, Phone #, GPA

Crosstab (counts) of Gender vs. Birth_Region:

             Birth_Region
Gender    Canada   Foreign   Total
M           16        14       30
F           10        22       32
Total       26        36       62
Example
DMQL: Describe general characteristics of graduate
students in the Big-University database
use Big_University_DB
mine characteristics as “Science_Students”
in relevance to name, gender, major, birth_place, birth_date,
residence, phone#, gpa
from student
where status in “graduate”
Corresponding SQL statement:
Select name, gender, major, birth_place, birth_date, residence,
phone#, gpa
from student
where status in {“Msc”, “MBA”, “PhD” }
Presentation of Generalized Results
Generalized relation:
– Relations where some or all attributes are generalized, with counts
or other aggregation values accumulated.
Cross tabulation:
– Mapping results into cross tabulation form (similar to contingency
tables).
Visualization techniques:
– Pie charts, bar charts, curves, cubes, and other visual forms.
Quantitative characteristic rules:
– Mapping generalized result into characteristic rules with
quantitative information associated with it, e.g.,
$\forall X,\ grad(X) \wedge male(X) \Rightarrow$
$birth\_region(X) = ``Canada''\ [t: 53\%] \ \vee\ birth\_region(X) = ``foreign''\ [t: 47\%].$
Presentation of Generalized Results
(continued)
t-weight:
– An interestingness measure describing the typicality of
  each disjunct in the rule, or
  each tuple in the corresponding generalized relation

$t\_weight = \dfrac{count(q_a)}{\sum_{i=1}^{n} count(q_i)}$

where $q_a$ is one of the $n$ generalized tuples (disjuncts).
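As a worked check against the earlier Gender vs. Birth_Region crosstab (taking the male-row counts 16 and 14 as the two disjunct counts):

$t\_weight_{Canada} = \dfrac{16}{16 + 14} \approx 53\%, \qquad t\_weight_{foreign} = \dfrac{14}{16 + 14} \approx 47\%,$

which matches the [t: 53%] and [t: 47%] annotations in the characteristic rule above.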
Presentation—Crosstab
Attribute Relevance Analysis
Why?
– Which dimensions should be included?
– How high level of generalization?
– Automatic vs. interactive
– Reduce the number of attributes; patterns become easier to understand
What?
– statistical method for preprocessing data
filter out irrelevant or weakly relevant attributes
retain or rank the relevant attributes
– relevance related to dimensions and levels
– analytical characterization, analytical comparison
Attribute relevance analysis (cont’d)
How?
1. Data Collection
2. Analytical Generalization
Use information gain analysis (e.g., entropy or other
measures) to identify highly relevant dimensions and
levels.
3. Relevance Analysis
Sort and select the most relevant dimensions and levels.
4. Attribute-oriented Induction for class description
On selected dimension/level
5. OLAP operations (e.g. drilling, slicing) on
relevance rules
Relevance Measures
Quantitative relevance measure determines the
classifying power of an attribute within a set of
data.
Methods
– information gain (ID3)
– gain ratio (C4.5)
– gini index
– χ² (chi-square) contingency table statistics
– uncertainty coefficient
Information-Theoretic Approach
Decision tree
– each internal node tests an attribute
– each branch corresponds to attribute value
– each leaf node assigns a classification
ID3 algorithm
– build decision tree based on training objects with
known class labels to classify testing objects
– rank attributes with information gain measure
– minimal height
the least number of tests to classify an object
Training Examples
(PlayTennis training-example table not reproduced.)
Top-Down Induction of Decision
Tree
Attributes = {Outlook, Temperature, Humidity, Wind}
PlayTennis = {yes, no}
(Figure: partial decision tree with root node Outlook and branches sunny, overcast, and rain leading to yes/no leaves.)
Entropy and Information Gain
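The formulas on this slide are not reproduced; below is a minimal Python sketch of entropy and information gain on a tiny PlayTennis-style sample (the records and counts are illustrative assumptions, not the original training table):

```python
import math
from collections import Counter

def entropy(labels):
    """Expected information I(s1, ..., sm) = -sum_i p_i * log2(p_i)."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(rows, attr, labels):
    """Gain(A) = I(labels) - sum over values v of (|S_v|/|S|) * I(labels in S_v)."""
    total = len(labels)
    expected = 0.0
    for value in set(row[attr] for row in rows):
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        expected += len(subset) / total * entropy(subset)
    return entropy(labels) - expected

# Tiny illustrative PlayTennis-style sample (hypothetical records)
data = [
    {"Outlook": "sunny",    "Wind": "weak"},
    {"Outlook": "sunny",    "Wind": "strong"},
    {"Outlook": "overcast", "Wind": "weak"},
    {"Outlook": "rain",     "Wind": "weak"},
    {"Outlook": "rain",     "Wind": "strong"},
]
play = ["no", "no", "yes", "yes", "no"]

for a in ("Outlook", "Wind"):
    print(a, round(information_gain(data, a, play), 3))
```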
Example: Analytical
Characterization
Task
– Mine general characteristics describing graduate
students using analytical characterization
Given
– attributes name, gender, major, birth_place, birth_date,
phone#, and gpa
– Gen(ai) = concept hierarchies on ai
– Ui = attribute analytical thresholds for ai
– Ti = attribute generalization thresholds for ai
– R = attribute relevance threshold
Example: Analytical
Characterization (cont’d)
1. Data collection
– target class: graduate student
– contrasting class: undergraduate student
2. Analytical generalization using Ui
– attribute removal
remove name and phone#
– attribute generalization
generalize major, birth_place, birth_date and gpa
accumulate counts
– candidate relation: gender, major, birth_country,
age_range and gpa
Example: Analytical characterization
(2)
gender major birth_country age_range gpa count
M Science Canada 20-25 Very_good 16
F Science Foreign 25-30 Excellent 22
M Engineering Foreign 25-30 Excellent 18
F Science Foreign 25-30 Excellent 25
M Science Canada 20-25 Excellent 21
F Engineering Canada 20-25 Excellent 18
$I(s_{11}, s_{21}) = -\dfrac{84}{126}\log_2\dfrac{84}{126} - \dfrac{42}{126}\log_2\dfrac{42}{126} = 0.9183$
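In general, the expected information of a two-way split containing $s_1$ and $s_2$ tuples is $I(s_1, s_2) = -\frac{s_1}{s}\log_2\frac{s_1}{s} - \frac{s_2}{s}\log_2\frac{s_2}{s}$ with $s = s_1 + s_2$; the line above instantiates this with counts 84 and 42.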
Example: Analytical Characterization (4)
Example: Analytical comparison
Task
– Compare graduate and undergraduate students using
discriminant rule.
– DMQL query
use Big_University_DB
mine comparison as “grad_vs_undergrad_students”
in relevance to name, gender, major, birth_place, birth_date,
residence, phone#, gpa
for “graduate_students”
where status in “graduate”
versus “undergraduate_students”
where status in “undergraduate”
analyze count%
from student
Example: Analytical comparison (2)
Given
– attributes name, gender, major, birth_place,
birth_date, residence, phone# and gpa
– Gen(ai) = concept hierarchies on attributes ai
– Ui = attribute analytical thresholds for attributes ai
– Ti = attribute generalization thresholds for
attributes ai
– R = attribute relevance threshold
Example: Analytical comparison (3)
1. Data collection
– target and contrasting classes
2. Attribute relevance analysis
– select the relevant dimensions using the relevance measures described above
3. Synchronous generalization
– controlled by user-specified dimension thresholds
– prime target and contrasting class(es) relations/cuboids
Example: Analytical comparison (4)
Birth_country Age_range Gpa Count%
Canada 20-25 Good 5.53%
Canada 25-30 Good 2.32%
Canada Over_30 Very_good 5.86%
… … … …
Other Over_30 Excellent 4.68%
Prime generalized relation for the target class: Graduate students
5. Presentation
– as generalized relations, crosstabs, bar charts, pie charts, or
rules
– contrasting measures to reflect comparison between target
and contrasting classes
e.g. count%
Quantitative Discriminant Rules
Cj = target class
qa = a generalized tuple that covers some tuples of the
target class
– but can also cover some tuples of the contrasting class(es)
d-weight
– range: [0.0, 1.0] or [0%, 100%]
$d\_weight = \dfrac{count(q_a \in C_j)}{\sum_{i=1}^{m} count(q_a \in C_i)}$
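A small Python sketch of both weights, computed from per-class counts of a generalized tuple (the counts are hypothetical):

```python
def t_weights(counts_within_class):
    """t-weight of each generalized tuple within one class:
    count(q_a) / sum of counts of all generalized tuples of that class."""
    total = sum(counts_within_class)
    return [c / total for c in counts_within_class]

def d_weights(counts_per_class):
    """d-weight of a single generalized tuple q_a across m classes:
    count(q_a in C_j) / sum_i count(q_a in C_i)."""
    total = sum(counts_per_class)
    return [c / total for c in counts_per_class]

# Hypothetical generalized tuple occurring 90 times in the target class
# and 210 times in the contrasting class
print(d_weights([90, 210]))   # [0.3, 0.7] -> d-weight of 30% for the target class
```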
Example: Quantitative Description
Rule
Location/item    TV                     Computer               Both_items
Both_regions     200 (t:20%, d:100%)    800 (t:80%, d:100%)    1000 (t:100%, d:100%)

Crosstab showing associated t-weight and d-weight values and the total number (in thousands) of
TVs and computers sold at AllElectronics in 1998 (only the Both_regions row is reproduced here).
$\forall X,\ Europe(X) \Rightarrow$
$(item(X) = ``TV'')\ [t: 25\%, d: 40\%] \ \vee\ (item(X) = ``computer'')\ [t: 75\%, d: 30\%]$
Quantitative Discriminant Rules
A high d-weight in the target class indicates that the concept
represented by the generalized tuple is primarily derived
from the target class
A low d-weight implies the concept is primarily derived from the
contrasting class(es)
A threshold can be set to control the display of
interesting tuples
quantitative discriminant rule form
$\forall X,\ target\_class(X) \Leftarrow condition(X)\ [d: d\_weight]$
Mining Data Dispersion
Characteristics
Motivation
– To better understand the data: central tendency, variation and
spread
Data dispersion characteristics
– median, max, min, quantiles, outliers, variance, etc.
Numerical dimensions correspond to sorted intervals
– Data dispersion: analyzed with multiple granularities of
precision
– Boxplot or quantile analysis on sorted intervals
Dispersion analysis on computed measures
– Folding measures into numerical dimensions
– Boxplot or quantile analysis on the transformed cube
Measuring the Central Tendency
Mean: $\bar{x} = \dfrac{1}{n}\sum_{i=1}^{n} x_i$

– Weighted arithmetic mean: $\bar{x} = \dfrac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}$

Median: A holistic measure
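A short NumPy sketch of these measures on toy data (values and weights are illustrative):

```python
import numpy as np

x = np.array([30.0, 36.0, 47.0, 50.0, 52.0, 52.0, 56.0])  # toy values
w = np.array([1.0, 2.0, 1.0, 1.0, 3.0, 1.0, 1.0])          # toy weights

mean = x.mean()                           # (1/n) * sum(x_i)
weighted_mean = np.average(x, weights=w)  # sum(w_i * x_i) / sum(w_i)
median = np.median(x)                     # holistic: needs the entire sorted data set
print(mean, weighted_mean, median)
```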
A Boxplot
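The boxplot figure is not reproduced here. A minimal matplotlib sketch that draws a comparable plot on hypothetical unit-price data for two branches:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical unit-price samples for two branches
branch1 = np.array([40, 43, 47, 52, 56, 62, 66, 70, 75, 80, 120])
branch2 = np.array([38, 41, 45, 50, 55, 59, 63, 68, 74, 78])

plt.boxplot([branch1, branch2])   # box = quartiles, whiskers extend to non-outlier extremes
plt.xticks([1, 2], ["branch 1", "branch 2"])
plt.ylabel("unit price ($)")
plt.show()
```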
Mining Descriptive Statistical Measures in
Large Databases
Variance

$s^2 = \dfrac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 = \dfrac{1}{n}\left[\sum_{i=1}^{n} x_i^2 - \dfrac{1}{n}\left(\sum_{i=1}^{n} x_i\right)^2\right]$
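A NumPy check that the two forms above agree (population form with 1/n; the values are the same toy data used earlier):

```python
import numpy as np

x = np.array([30.0, 36.0, 47.0, 50.0, 52.0, 52.0, 56.0])  # toy values

var_direct   = np.mean((x - x.mean()) ** 2)                        # (1/n) * sum (x_i - mean)^2
var_shortcut = (np.sum(x ** 2) - np.sum(x) ** 2 / len(x)) / len(x)
assert np.isclose(var_direct, var_shortcut)
print(var_direct)   # equals np.var(x), which uses ddof=0 (the 1/n form) by default
```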
Histogram Analysis
Quantile Plot
Displays all of the data (allowing the user to assess both
the overall behavior and unusual occurrences)
Plots quantile information
– For data xi sorted in increasing order, fi indicates that
approximately 100*fi% of the data are below or equal to the
value xi
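A sketch of one common convention for the f_i values, assuming matplotlib for the plot (the data are illustrative):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.sort(np.array([30.0, 36.0, 47.0, 50.0, 52.0, 52.0, 56.0]))
n = len(x)
f = (np.arange(1, n + 1) - 0.5) / n   # f_i = (i - 0.5) / n, one common choice

plt.plot(f, x, "o-")
plt.xlabel("f-value (approx. fraction of data <= x_i)")
plt.ylabel("x_i")
plt.show()
```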
Quantile-Quantile (Q-Q) Plot
Graphs the quantiles of one univariate distribution
against the corresponding quantiles of another
Allows the user to view whether there is a shift in going
from one distribution to another
Scatter plot
Loess Curve
Adds a smooth curve to a scatter plot in order to
provide better perception of the pattern of dependence
Loess curve is fitted by setting two parameters: a
smoothing parameter, and the degree of the
polynomials that are fitted by the regression
Graphic Displays of Basic Statistical
Descriptions
Histogram: (shown before)
Boxplot: (covered before)
Quantile plot: each value xi is paired with fi indicating
that approximately 100*fi % of the data are less than or equal to xi
Quantile-quantile (q-q) plot: graphs the quantiles of one
univariate distribution against the corresponding
quantiles of another
Scatter plot: each pair of values is a pair of coordinates
and plotted as points in the plane
Loess (local regression) curve: add a smooth curve to a
scatter plot to provide better perception of the pattern of
dependence
Data Mining System
Architectures
Coupling data mining system with DB/DW
system
– No coupling—flat file processing, not recommended
– Loose coupling
Fetching data from DB/DW