
1

CS 43105 Data Mining Techniques


Chapter 5 Association Rule Mining
Xiang Lian
Department of Computer Science
Kent State University
Email: [email protected]
Homepage: http://www.cs.kent.edu/~xlian/
2

Outline
• What is an association rule?
• Advanced Frequent Pattern Mining
• Mining Multi-Level Association
• Mining Multi-Dimensional Association
• Mining Quantitative Association Rules
• Mining Rare Patterns and Negative Patterns
3

Association Rule Mining
• Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction

Example of association rules over market-basket transactions:
{Diaper} → {Beer}
{Beer, Bread} → {Milk}

TID | Items
1 | Bread, Milk
2 | Bread, Diaper, Beer, Eggs
3 | Milk, Diaper, Beer, Coke
4 | Bread, Milk, Diaper, Beer
5 | Bread, Milk, Diaper, Coke

Implication means co-occurrence, not causality!
4

Definition: Association Rule
• Association Rule: an implication expression of the form X → Y, where X and Y are itemsets
  • Example: {Milk, Diaper} → {Beer}
• Rule Evaluation Metrics
  • Support (s): the fraction of transactions that contain both X and Y
  • Confidence (c = s(X ∪ Y) / s(X)): measures how often items in Y appear in transactions that contain X

TID | Items
1 | Bread, Milk
2 | Bread, Diaper, Beer, Eggs
3 | Milk, Diaper, Beer, Coke
4 | Bread, Milk, Diaper, Beer
5 | Bread, Milk, Diaper, Coke

Example: {Milk, Diaper} → {Beer}
s = σ(Milk, Diaper, Beer) / |T| = 2/5 = 0.4
c = σ(Milk, Diaper, Beer) / σ(Milk, Diaper) = 2/3 ≈ 0.67
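To make the metrics concrete, here is a minimal Python sketch (not part of the original slides) that recomputes s and c for {Milk, Diaper} → {Beer} over the five example transactions:

```python
# Minimal sketch: support and confidence over the example market-basket data.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """c(X -> Y) = s(X u Y) / s(X)."""
    return support(lhs | rhs) / support(lhs)

print(support({"Milk", "Diaper", "Beer"}))       # 0.4
print(confidence({"Milk", "Diaper"}, {"Beer"}))  # 0.666...
```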
5

Association Rule Mining Task
• Given a set of transactions T, the goal of
association rule mining is to find all rules having
• support ≥ minsup threshold
• confidence ≥ minconf threshold
• Brute-force approach:
• List all possible association rules
• Compute the support and confidence for each rule
• Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!
6

Mining Association Rules
TID | Items
1 | Bread, Milk
2 | Bread, Diaper, Beer, Eggs
3 | Milk, Diaper, Beer, Coke
4 | Bread, Milk, Diaper, Beer
5 | Bread, Milk, Diaper, Coke

Example rules:
{Milk, Diaper} → {Beer} (s = 0.4, c = 0.67)
{Milk, Beer} → {Diaper} (s = 0.4, c = 1.0)
{Diaper, Beer} → {Milk} (s = 0.4, c = 0.67)
{Beer} → {Milk, Diaper} (s = 0.4, c = 0.67)
{Diaper} → {Milk, Beer} (s = 0.4, c = 0.5)
{Milk} → {Diaper, Beer} (s = 0.4, c = 0.5)
Observations:
• All the above rules are binary partitions of the same itemset:
{Milk, Diaper, Beer}
• Rules originating from the same itemset have identical support
but can have different confidence
• Thus, we may decouple the support and confidence
requirements
7

Mining Association Rules
• Two-step approach:
  1. Frequent itemset generation: generate all itemsets whose support ≥ minsup
  2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset
• Frequent itemset generation is still computationally expensive (see the sketch below)
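As a rough illustration of step 1 (not from the slides), the sketch below implements level-wise frequent itemset generation in the spirit of Apriori, using candidate generation followed by subset-based pruning; the function and variable names are mine:

```python
from itertools import combinations

def apriori(transactions, minsup):
    """Level-wise frequent itemset mining (unoptimized teaching sketch)."""
    n = len(transactions)

    def sup(c):
        return sum(c <= t for t in transactions) / n

    # L1: frequent 1-itemsets
    items = sorted({i for t in transactions for i in t})
    current = [frozenset([i]) for i in items if sup(frozenset([i])) >= minsup]
    frequent, k = list(current), 2
    while current:
        # Candidate generation: unions of frequent (k-1)-itemsets of size k
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Apriori pruning: every (k-1)-subset must itself be frequent
        current = [c for c in candidates
                   if all(frozenset(s) in frequent for s in combinations(c, k - 1))
                   and sup(c) >= minsup]
        frequent.extend(current)
        k += 1
    return frequent

# On the earlier 5-transaction data, apriori(transactions, minsup=0.6) returns
# {Bread}, {Milk}, {Diaper}, {Beer} and four frequent pairs such as {Bread, Milk}.
```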
8
Computational Complexity
• Given d unique items:
  • Total number of itemsets = 2^d
  • Total number of possible association rules:

    R = \sum_{k=1}^{d-1} \left[ \binom{d}{k} \sum_{j=1}^{d-k} \binom{d-k}{j} \right] = 3^d - 2^{d+1} + 1

  • If d = 6, R = 602 rules
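A quick sanity check of the closed form by direct summation (illustrative only):

```python
from math import comb

def rule_count(d):
    """Number of association rules over d items, by direct summation."""
    return sum(comb(d, k) * sum(comb(d - k, j) for j in range(1, d - k + 1))
               for k in range(1, d))

d = 6
assert rule_count(d) == 3**d - 2**(d + 1) + 1 == 602
```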
9

Rule Generation
• Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L - f satisfies the minimum confidence requirement
• If {A, B, C, D} is a frequent itemset, the candidate rules are:
  ABC → D, ABD → C, ACD → B, BCD → A,
  A → BCD, B → ACD, C → ABD, D → ABC,
  AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB
• If |L| = k, then there are 2^k - 2 candidate association rules (ignoring L → ∅ and ∅ → L)
10

Rule Generation
• How to efficiently generate rules from frequent itemsets?
• In general, confidence does not have an anti-monotone property:
  c(ABC → D) can be larger or smaller than c(AB → D)
• But the confidence of rules generated from the same itemset does have an anti-monotone property
  • e.g., L = {A, B, C, D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
• Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule (see the sketch below)
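A hedged sketch of confidence-based rule generation in the spirit of Apriori's ap-genrules: consequents grow level by level, and a consequent is extended only if its rule passed the confidence threshold. The data reuse the five-transaction example; names are mine:

```python
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def generate_rules(freq_itemset, minconf):
    """Grow consequents level-wise, pruning by confidence anti-monotonicity."""
    rules, consequents = [], [frozenset([i]) for i in freq_itemset]
    while consequents:
        kept = []
        for rhs in consequents:
            lhs = freq_itemset - rhs
            if not lhs:
                continue
            conf = support(freq_itemset) / support(lhs)
            if conf >= minconf:
                rules.append((set(lhs), set(rhs), conf))
                kept.append(rhs)
        # Merge surviving consequents into size-(k+1) candidate consequents
        consequents = {a | b for a in kept for b in kept
                       if len(a | b) == len(a) + 1}
    return rules

# Reproduces the four rules with c >= 0.6 from the earlier example slide.
print(generate_rules(frozenset({"Milk", "Diaper", "Beer"}), minconf=0.6))
```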
11

Rule Generation for Apriori Algorithm
Lattice of rules for the frequent itemset {A, B, C, D} (figure):
  ABCD ⇒ { }
  BCD ⇒ A, ACD ⇒ B, ABD ⇒ C, ABC ⇒ D
  CD ⇒ AB, BD ⇒ AC, BC ⇒ AD, AD ⇒ BC, AC ⇒ BD, AB ⇒ CD
  D ⇒ ABC, C ⇒ ABD, B ⇒ ACD, A ⇒ BCD
If BCD ⇒ A is found to be a low-confidence rule, every rule below it in the lattice whose consequent contains A (CD ⇒ AB, BD ⇒ AC, BC ⇒ AD, D ⇒ ABC, C ⇒ ABD, B ⇒ ACD) is pruned.
12

Rule Generation for Apriori Algorithm
• A candidate rule is generated by merging two rules that share the same prefix in the rule consequent
  • join(CD ⇒ AB, BD ⇒ AC) produces the candidate rule D ⇒ ABC
• Prune the rule D ⇒ ABC if its subset rule AD ⇒ BC does not have high confidence
13

Beyond Itemsets
• Sequence Mining
• Finding frequent subsequences from a collection of
sequences
• Graph Mining
• Finding frequent (connected) subgraphs from a
collection of graphs
• Tree Mining
• Finding frequent (embedded) subtrees from a set of
trees/graphs
• Geometric Structure Mining
• Finding frequent substructures from 3-D or 2-D
geometric graphs
• Others…
Research on Pattern Mining: A Road Map
14
15

Advanced Frequent Pattern Mining
• Pattern Mining in Multi-Level, Multi-Dimensional
Space
• Mining Multi-Level Association
• Mining Multi-Dimensional Association
• Mining Quantitative Association Rules
• Mining Rare Patterns and Negative Patterns
• Constraint-Based Frequent Pattern Mining
16

Mining Multiple-Level Association Rules
• Items often form hierarchies
• Flexible support settings: items at the lower level are expected to have lower support
• Exploration of shared multi-level mining (Agrawal & Srikant@VLDB'95, Han & Fu@VLDB'95)

Uniform support vs. reduced support (figure):
  Level 1: Milk [support = 10%]
  Level 2: 2% Milk [support = 6%], Skim Milk [support = 4%]
  Uniform support: min_sup = 5% at both levels
  Reduced support: min_sup = 5% at Level 1, min_sup = 3% at Level 2
17

Multi-Level Association: Flexible Support and Redundancy Filtering

• Flexible min-support thresholds: some items are more valuable but less frequent
  • Use non-uniform, group-based min-support
  • E.g., {diamond, watch, camera}: 0.05%; {bread, milk}: 5%; …
• Redundancy filtering: some rules may be redundant due to "ancestor" relationships between items
  • milk → wheat bread [support = 8%, confidence = 70%]
  • 2% milk → wheat bread [support = 2%, confidence = 72%]
  • The first rule is an ancestor of the second rule
• A rule is redundant if its support is close to the "expected" value based on the rule's ancestor (see the sketch below)
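One possible way to operationalize this redundancy test; the tolerance and the 25% market share below are illustrative assumptions, not values from the slides:

```python
def is_redundant(ancestor_sup, child_share, descendant_sup, tol=0.25):
    """Flag a rule whose support is close to the value 'expected' from its
    ancestor: expected = s(ancestor rule) * share of the child item."""
    expected = ancestor_sup * child_share
    return abs(descendant_sup - expected) <= tol * expected

# milk -> wheat bread has support 8%; if 2% milk accounts for ~25% of milk
# sales (assumed figure), the expected support of "2% milk -> wheat bread"
# is 8% * 0.25 = 2%, matching the observed 2%: the rule is redundant.
print(is_redundant(0.08, 0.25, 0.02))  # True
```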
18

Mining Multi-Dimensional Association
• Single-dimensional rules: buys(X, "milk") → buys(X, "bread")
• Multi-dimensional rules: ≥ 2 dimensions or predicates
  • Inter-dimension association rules (no repeated predicates):
    age(X, "19-25") ∧ occupation(X, "student") → buys(X, "coke")
  • Hybrid-dimension association rules (repeated predicates):
    age(X, "19-25") ∧ buys(X, "popcorn") → buys(X, "coke")
• Categorical attributes: finite number of possible values, no ordering among values (data cube approach)
• Quantitative attributes: numeric, implicit ordering among values (discretization, clustering, and gradient approaches)
19

Mining Quantitative Associations
Techniques can be categorized by how numerical attributes, such as age or salary, are treated:
1. Static discretization based on predefined concept hierarchies (data cube methods)
2. Dynamic discretization based on data distribution (quantitative rules, e.g., Agrawal & Srikant@SIGMOD'96)
3. Clustering: distance-based association (e.g., Yang & Miller@SIGMOD'97)
   • One-dimensional clustering, then association
4. Deviation (e.g., Aumann and Lindell@KDD'99):
   Sex = female ⇒ Wage: mean = $7/hr (overall mean = $9)
20

Static Discretization of Quantitative Attributes
• Discretized prior to mining using a concept hierarchy: numeric values are replaced by ranges (a small sketch follows below)
• In a relational database, finding all frequent k-predicate sets requires k or k+1 table scans
• The data cube is well suited for mining:
  • The cells of an n-dimensional cuboid correspond to the predicate sets
  • Mining from data cubes can be much faster

Cuboid lattice for the dimensions age, income, buys (figure):
  ( )
  (age), (income), (buys)
  (age, income), (age, buys), (income, buys)
  (age, income, buys)
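A minimal sketch of static discretization; the age cut points and range labels are assumed for illustration, not taken from the slides:

```python
def discretize(value, cut_points, labels):
    """Replace a numeric value by the label of its predefined range."""
    for cut, label in zip(cut_points, labels):
        if value < cut:
            return label
    return labels[-1]

age_labels = ["young", "middle", "senior"]   # assumed concept hierarchy
print(discretize(23, [30, 60], age_labels))  # 'young'
print(discretize(45, [30, 60], age_labels))  # 'middle'
```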
21

Quantitative Association Rules Based on Statistical Inference Theory [Aumann and Lindell@DMKD'03]

• Finding extraordinary, and therefore interesting, phenomena, e.g.,
  (Sex = female) ⇒ Wage: mean = $7/hr (overall mean = $9)
  • LHS: a subset of the population
  • RHS: an extraordinary behavior of this subset
• The rule is accepted only if a statistical test (e.g., a Z-test) confirms the inference with high confidence (see the sketch below)
• Subrule: highlights the extraordinary behavior of a subset of the population of the super rule
  • E.g., (Sex = female) ∧ (South = yes) ⇒ mean wage = $6.3/hr
• Two forms of rules
  • Categorical ⇒ quantitative rules, or quantitative ⇒ quantitative rules
  • E.g., Education in [14-18] (yrs) ⇒ mean wage = $11.64/hr
• Open problem: efficient methods for an LHS containing two or more quantitative attributes
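A sketch of the acceptance test as a two-sample Z-test on the difference of means; the sample sizes and standard deviations below are placeholder values, not data from the slides:

```python
from math import sqrt

def z_score(mean_sub, sd_sub, n_sub, mean_rest, sd_rest, n_rest):
    """Two-sample Z statistic for a difference of means."""
    se = sqrt(sd_sub**2 / n_sub + sd_rest**2 / n_rest)
    return (mean_sub - mean_rest) / se

# Placeholder numbers: subset (Sex = female) mean $7/hr vs. rest mean $9/hr.
z = z_score(7.0, 2.0, 500, 9.0, 3.0, 1500)
print(abs(z) > 1.96)  # True here: accept the rule at roughly the 5% level
```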
22

Negative and Rare Patterns
• Rare patterns: very low support but interesting
  • E.g., buying Rolex watches
  • Mining: set individual-based or special group-based support thresholds for valuable items
• Negative patterns
  • Since it is unlikely that one buys a Ford Expedition (an SUV) and a Toyota Prius (a hybrid car) together, Ford Expedition and Toyota Prius are likely negatively correlated patterns
  • Negatively correlated patterns that are infrequent tend to be more interesting than those that are frequent
23

Defining Negative Correlated Patterns (I)
• Definition 1 (support-based)
  • If itemsets X and Y are both frequent but rarely occur together, i.e., sup(X ∪ Y) < sup(X) × sup(Y), then X and Y are negatively correlated
• Problem: a store sold two needle packages A and B; each was bought in 100 transactions, but only one transaction contained both A and B
  • When there are in total 200 transactions:
    s(A ∪ B) = 0.005, s(A) × s(B) = 0.25, so s(A ∪ B) < s(A) × s(B)
  • When there are in total 10^5 transactions:
    s(A ∪ B) = 1/10^5, s(A) × s(B) = 1/10^3 × 1/10^3 = 1/10^6, so s(A ∪ B) > s(A) × s(B)
• Where is the problem? Null transactions: the support-based definition is not null-invariant!
24

Defining Negative Correlated Patterns (II)
• Definition 2 (negative itemset-based)
  • X is a negative itemset if (1) X = Ā ∪ B, where B is a set of positive items and Ā is a set of negative items, |Ā| ≥ 1, and (2) s(X) ≥ μ
  • Itemset X is negatively correlated if: (the defining condition appears only as a formula image in the original slide)
  • This definition suffers from a similar null-invariance problem
• Definition 3 (Kulczynski measure-based): if itemsets X and Y are frequent but (P(X|Y) + P(Y|X))/2 < ε, where ε is a negative pattern threshold, then X and Y are negatively correlated
• Ex.: for the same needle-package problem, no matter whether there are 200 or 10^5 transactions in total, if ε = 0.01 we have
  (P(A|B) + P(B|A))/2 = (0.01 + 0.01)/2 < ε
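The contrast between the two definitions is easy to reproduce numerically; the sketch below recomputes the needle example (100 transactions containing each of A and B, one containing both):

```python
def kulczynski(sup_xy, sup_x, sup_y):
    """Kulczynski measure: (P(X|Y) + P(Y|X)) / 2."""
    return (sup_xy / sup_x + sup_xy / sup_y) / 2

for n in (200, 10**5):                       # total number of transactions
    s_a, s_b, s_ab = 100 / n, 100 / n, 1 / n
    print(n,
          kulczynski(s_ab, s_a, s_b),        # always 0.01: null-invariant
          s_ab < s_a * s_b)                  # support-based test flips with n
```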
25

Constraint-based (Query-Directed) Mining
• Finding all the patterns in a database autonomously? Unrealistic!
  • The patterns could be too many but not focused!
• Data mining should be an interactive process
  • The user directs what is to be mined, using a data mining query language (or a graphical user interface)
• Constraint-based mining
  • User flexibility: provides constraints on what is to be mined
  • Optimization: explores such constraints for efficient mining (constraint pushing, similar to pushing selections first in DB query processing)
  • Note: the goal is still to find all the answers satisfying the constraints, not to find some answers by "heuristic search"
26

Constraints in Data Mining
• Knowledge type constraint:
• classification, association, etc.
• Data constraint — using SQL-like queries
• find product pairs sold together in stores in Chicago this
year
• Dimension/level constraint
• in relevance to region, price, brand, customer category
• Rule (or pattern) constraint
• small sales (price < $10) triggers big sales (sum > $200)
• Interestingness constraint
• strong rules: min_support ≥ 3%, min_confidence ≥ 60%
27

Meta-Rule Guided Mining
• A meta-rule can be in rule form with partially instantiated predicates and constants:
  P1(X, Y) ∧ P2(X, W) ⇒ buys(X, "iPad")
• The resulting derived rule can be:
  age(X, "15-25") ∧ profession(X, "student") ⇒ buys(X, "iPad")
• In general, it can be in the form:
  P1 ∧ P2 ∧ … ∧ Pl ⇒ Q1 ∧ Q2 ∧ … ∧ Qr
• Method to find meta-rules
  • Find frequent (l+r)-predicate sets (based on a min-support threshold)
  • Push constants deeply into the mining process when possible (see the remaining discussion of constraint-pushing techniques)
  • Use confidence, correlation, and other measures when possible
28

Pattern Space Pruning with Anti-Monotonicity Constraints

• A constraint C is anti-monotone if, whenever a super-pattern satisfies C, all of its sub-patterns do so too
• In other words (anti-monotonicity): if an itemset S violates the constraint, so does any of its supersets
• Ex. 1: sum(S.price) ≤ v is anti-monotone
• Ex. 2: range(S.profit) ≤ 15 is anti-monotone
  • Itemset ab violates C, and so does every superset of ab (see the sketch below)
• Ex. 3: sum(S.price) ≥ v is not anti-monotone
• Ex. 4: the support count is anti-monotone (the core property behind Apriori)

TDB (min_sup = 2)
TID | Transaction
10 | a, b, c, d, f
20 | b, c, d, f, g, h
30 | a, c, d, e, f
40 | c, e, f, g

Item profits: a = 40, b = 0, c = -20, d = 10, e = -30, f = 30, g = 20, h = -10
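A sketch of how the Ex. 2 constraint prunes the pattern space, using the profit table above; the helper name is mine:

```python
profit = {"a": 40, "b": 0, "c": -20, "d": 10,
          "e": -30, "f": 30, "g": 20, "h": -10}

def satisfies_range(itemset, limit=15):
    """Anti-monotone constraint: range(S.profit) <= limit."""
    vals = [profit[i] for i in itemset]
    return max(vals) - min(vals) <= limit

print(satisfies_range({"a", "b"}))  # False: range = 40, so prune
print(satisfies_range({"d", "g"}))  # True: range = 10, keep exploring
# Because the constraint is anti-monotone, once {a, b} violates it,
# no superset of {a, b} ever needs to be generated.
```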
29

Pattern Space Pruning with Monotonicity Constraints

• A constraint C is monotone if, once a pattern satisfies C, we do not need to check C in subsequent mining
• Alternatively (monotonicity): if an itemset S satisfies the constraint, so does any of its supersets
• Ex. 1: sum(S.price) ≥ v is monotone
• Ex. 2: min(S.price) ≤ v is monotone
• Ex. 3: C: range(S.profit) ≥ 15
  • Itemset ab satisfies C, and so does every superset of ab

TDB (min_sup = 2)
TID | Transaction
10 | a, b, c, d, f
20 | b, c, d, f, g, h
30 | a, c, d, e, f
40 | c, e, f, g

Item profits: a = 40, b = 0, c = -20, d = 10, e = -30, f = 30, g = 20, h = -10
30

Data Space Pruning with Data Anti-Monotonicity

• A constraint c is data anti-monotone if, whenever a pattern p cannot satisfy a transaction t under c, no superset of p can satisfy t under c either
• The key to data anti-monotonicity is recursive data reduction
• Ex. 1: sum(S.price) ≥ v is data anti-monotone
• Ex. 2: min(S.price) ≤ v is data anti-monotone
• Ex. 3: C: range(S.profit) ≥ 25 is data anti-monotone
  • Itemset {b, c}'s projected DB: T10′: {d, f, h}, T20′: {d, f, g, h}, T30′: {d, f, g}
  • Since T10′ cannot satisfy C, T10′ can be pruned (see the sketch below)

TDB (min_sup = 2)
TID | Transaction
10 | a, b, c, d, f, h
20 | b, c, d, f, g, h
30 | b, c, d, f, g
40 | c, e, f, g

Item profits: a = 40, b = 0, c = -20, d = -15, e = -30, f = -10, g = 20, h = -5
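A sketch of the data-reduction check for Ex. 3, using this slide's profit table; a projected transaction is dropped as soon as even its best possible extension cannot reach the required range. The helper name is mine:

```python
profit = {"a": 40, "b": 0, "c": -20, "d": -15,
          "e": -30, "f": -10, "g": 20, "h": -5}

def can_satisfy(pattern, projected_txn, min_range=25):
    """Can any superset of `pattern` within this transaction reach the range?"""
    vals = [profit[i] for i in pattern | projected_txn]
    return max(vals) - min(vals) >= min_range

pattern = {"b", "c"}  # projected DB of itemset {b, c}
for tid, txn in {"T10'": {"d", "f", "h"}, "T20'": {"d", "f", "g", "h"},
                 "T30'": {"d", "f", "g"}}.items():
    print(tid, can_satisfy(pattern, txn))  # T10' -> False, so it is pruned
```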
31

Convertible Constraints: Ordering Data in Transactions

• Convert tough constraints into anti-monotone or monotone ones by properly ordering items
• Examine C: avg(S.profit) ≥ 25
  • Order items in value-descending order: <a, f, g, d, b, h, c, e>
  • If an itemset afb violates C, so do afbh, afb*, and so on
  • The constraint becomes anti-monotone! (see the sketch below)

TDB (min_sup = 2)
TID | Transaction
10 | a, b, c, d, f
20 | b, c, d, f, g, h
30 | a, c, d, e, f
40 | c, e, f, g

Item profits: a = 40, b = 0, c = -20, d = 10, e = -30, f = 30, g = 20, h = -10
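A sketch of the conversion, using the profit table above: once items are enumerated in value-descending order, extending an itemset can only lower its average, so a violating itemset cuts off all of its extensions. The helper name is mine:

```python
profit = {"a": 40, "b": 0, "c": -20, "d": 10,
          "e": -30, "f": 30, "g": 20, "h": -10}
order = sorted(profit, key=profit.get, reverse=True)

def avg_ok(itemset, v=25):
    """Convertible constraint avg(S.profit) >= v, checked on an ordered itemset."""
    return sum(profit[i] for i in itemset) / len(itemset) >= v

print(order)                    # ['a', 'f', 'g', 'd', 'b', 'h', 'c', 'e']
print(avg_ok(["a", "f", "b"]))  # False: avg = 23.3 < 25
# Any item added after b in descending order is no larger than b, so it can
# only lower the average further: the whole branch (afbh, ...) is pruned,
# i.e., the constraint behaves anti-monotonically under this ordering.
```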
32

Handling Multiple Constraints
• Different constraints may require different, or even conflicting, item orderings
• If there is a conflict between item orderings
  • Try to satisfy one constraint first
  • Then use the ordering for the other constraint to mine frequent itemsets in the corresponding projected database
33

Ref: Mining Multi-Level and Quantitative Rules
• Y. Aumann and Y. Lindell. A Statistical Theory for Quantitative Association Rules. KDD'99
• T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Data mining using
two-dimensional optimized association rules: Scheme, algorithms, and
visualization. SIGMOD'96.
• J. Han and Y. Fu. Discovery of multiple-level association rules from large
databases. VLDB'95.
• R.J. Miller and Y. Yang. Association rules over interval data. SIGMOD'97.
• R. Srikant and R. Agrawal. Mining generalized association rules. VLDB'95.
• R. Srikant and R. Agrawal. Mining quantitative association rules in large
relational tables. SIGMOD'96.
• K. Wang, Y. He, and J. Han. Mining frequent itemsets using support
constraints. VLDB'00
• K. Yoda, T. Fukuda, Y. Morimoto, S. Morishita, and T. Tokuyama. Computing
optimized rectilinear regions for association rules. KDD'97.
34

Ref: Mining Other Kinds of Rules
• F. Korn, A. Labrinidis, Y. Kotidis, and C. Faloutsos. Ratio rules: A new
paradigm for fast, quantifiable data mining. VLDB'98
• Y. Huhtala, J. Kärkkäinen, P. Porkka, H. Toivonen. Efficient Discovery of
Functional and Approximate Dependencies Using Partitions. ICDE’98.
• H. V. Jagadish, J. Madar, and R. Ng. Semantic Compression and Pattern
Extraction with Fascicles. VLDB'99
• B. Lent, A. Swami, and J. Widom. Clustering association rules. ICDE'97.
• R. Meo, G. Psaila, and S. Ceri. A new SQL-like operator for mining
association rules. VLDB'96.
• A. Savasere, E. Omiecinski, and S. Navathe. Mining for strong negative
associations in a large database of customer transactions. ICDE'98.
• D. Tsur, J. D. Ullman, S. Abitboul, C. Clifton, R. Motwani, and S. Nestorov.
Query flocks: A generalization of association-rule mining. SIGMOD'98.
35

Ref: Constraint-Based Pattern Mining
• R. Srikant, Q. Vu, and R. Agrawal. Mining association rules with item
constraints. KDD'97
• R. Ng, L.V.S. Lakshmanan, J. Han & A. Pang. Exploratory mining and pruning
optimizations of constrained association rules. SIGMOD’98
• G. Grahne, L. Lakshmanan, and X. Wang. Efficient mining of constrained
correlated sets. ICDE'00
• J. Pei, J. Han, and L. V. S. Lakshmanan. Mining Frequent Itemsets with
Convertible Constraints. ICDE'01
• J. Pei, J. Han, and W. Wang, Mining Sequential Patterns with Constraints in
Large Databases, CIKM'02
• F. Bonchi, F. Giannotti, A. Mazzanti, and D. Pedreschi. ExAnte: Anticipated
Data Reduction in Constrained Pattern Mining, PKDD'03
• F. Zhu, X. Yan, J. Han, and P. S. Yu, “gPrune: A Constraint Pushing Framework
for Graph Pattern Mining”, PAKDD'07
36

Ref: Mining Sequential Patterns
• X. Ji, J. Bailey, and G. Dong. Mining minimal distinguishing subsequence patterns with
gap constraints. ICDM'05
• H. Mannila, H Toivonen, and A. I. Verkamo. Discovery of frequent episodes in event
sequences. DAMI:97.
• J. Pei, J. Han, H. Pinto, Q. Chen, U. Dayal, and M.-C. Hsu. PrefixSpan: Mining Sequential
Patterns Efficiently by Prefix-Projected Pattern Growth. ICDE'01.
• R. Srikant and R. Agrawal. Mining sequential patterns: Generalizations and performance
improvements. EDBT’96.
• X. Yan, J. Han, and R. Afshar. CloSpan: Mining Closed Sequential Patterns in Large
Datasets. SDM'03.
• M. Zaki. SPADE: An Efficient Algorithm for Mining Frequent Sequences. Machine
Learning:01.
37

Mining Graph and Structured Patterns
• A. Inokuchi, T. Washio, and H. Motoda. An apriori-based algorithm for
mining frequent substructures from graph data. PKDD'00
• M. Kuramochi and G. Karypis. Frequent Subgraph Discovery. ICDM'01.
• X. Yan and J. Han. gSpan: Graph-based substructure pattern mining.
ICDM'02
• X. Yan and J. Han. CloseGraph: Mining Closed Frequent Graph Patterns.
KDD'03
• X. Yan, P. S. Yu, and J. Han. Graph indexing based on discriminative frequent
structure analysis. ACM TODS, 30:960–993, 2005
• X. Yan, F. Zhu, P. S. Yu, and J. Han. Feature-based substructure similarity
search. ACM Trans. Database Systems, 31:1418–1453, 2006
38

Ref: Mining Spatial, Spatiotemporal, Multimedia Data
• H. Cao, N. Mamoulis, and D. W. Cheung. Mining frequent spatiotemporal sequential patterns. ICDM'05
• D. Gunopulos and I. Tsoukatos. Efficient Mining of Spatiotemporal Patterns.
SSTD'01
• K. Koperski and J. Han, Discovery of Spatial Association Rules in Geographic
Information Databases, SSD’95
• H. Xiong, S. Shekhar, Y. Huang, V. Kumar, X. Ma, and J. S. Yoo. A framework for
discovering co-location patterns in data sets with extended spatial objects.
SDM'04
• J. Yuan, Y. Wu, and M. Yang. Discovery of collocation patterns: From visual
words to visual phrases. CVPR'07
• O. R. Zaiane, J. Han, and H. Zhu, Mining Recurrent Items in Multimedia with
Progressive Resolution Refinement. ICDE'00
39

Ref: Mining Frequent Patterns in Time-Series Data
• B. Ozden, S. Ramaswamy, and A. Silberschatz. Cyclic association rules. ICDE'98.
• J. Han, G. Dong and Y. Yin, Efficient Mining of Partial Periodic Patterns in Time Series
Database, ICDE'99.
• J. Shieh and E. Keogh. iSAX: Indexing and mining terabyte sized time series. KDD'08
• B.-K. Yi, N. Sidiropoulos, T. Johnson, H. V. Jagadish, C. Faloutsos, and A. Biliris. Online
Data Mining for Co-Evolving Time Sequences. ICDE'00.
• W. Wang, J. Yang, R. Muntz. TAR: Temporal Association Rules on Evolving Numerical
Attributes. ICDE’01.
• J. Yang, W. Wang, P. S. Yu. Mining Asynchronous Periodic Patterns in Time Series Data.
TKDE’03
• L. Ye and E. Keogh. Time series shapelets: A new primitive for data mining. KDD'09
40

Ref: FP for Classification and Clustering
• G. Dong and J. Li. Efficient mining of emerging patterns: Discovering
trends and differences. KDD'99.
• B. Liu, W. Hsu, Y. Ma. Integrating Classification and Association Rule
Mining. KDD’98.
• W. Li, J. Han, and J. Pei. CMAR: Accurate and Efficient Classification Based
on Multiple Class-Association Rules. ICDM'01.
• H. Wang, W. Wang, J. Yang, and P.S. Yu. Clustering by pattern similarity in
large data sets. SIGMOD’ 02.
• J. Yang and W. Wang. CLUSEQ: efficient and effective sequence clustering.
ICDE’03.
• X. Yin and J. Han. CPAR: Classification based on Predictive Association
Rules. SDM'03.
• H. Cheng, X. Yan, J. Han, and C.-W. Hsu. Discriminative Frequent Pattern Analysis for Effective Classification. ICDE'07
41

Ref: Privacy-Preserving FP Mining
• A. Evfimievski, R. Srikant, R. Agrawal, and J. Gehrke. Privacy Preserving Mining of Association Rules. KDD'02.
• A. Evfimievski, J. Gehrke, and R. Srikant. Limiting Privacy Breaches in
Privacy Preserving Data Mining. PODS’03
• J. Vaidya and C. Clifton. Privacy Preserving Association Rule Mining in
Vertically Partitioned Data. KDD’02
42

Mining Compressed Patterns
• D. Xin, H. Cheng, X. Yan, and J. Han. Extracting redundancy-aware
top-k patterns. KDD'06
• D. Xin, J. Han, X. Yan, and H. Cheng. Mining compressed frequent-
pattern sets. VLDB'05
• X. Yan, H. Cheng, J. Han, and D. Xin. Summarizing itemset
patterns: A profile-based approach. KDD'05
43

Mining Colossal Patterns
• F. Zhu, X. Yan, J. Han, P. S. Yu, and H. Cheng. Mining colossal
frequent patterns by core pattern fusion. ICDE'07
• F. Zhu, Q. Qu, D. Lo, X. Yan, J. Han, and P. S. Yu. Mining Top-K Large Structural Patterns in a Massive Network. VLDB'11
44

Ref: FP Mining from Data Streams
• Y. Chen, G. Dong, J. Han, B. W. Wah, and J. Wang. Multi-Dimensional
Regression Analysis of Time-Series Data Streams. VLDB'02.
• R. M. Karp, C. H. Papadimitriou, and S. Shenker. A simple algorithm for
finding frequent elements in streams and bags. TODS 2003.
• G. Manku and R. Motwani. Approximate Frequency Counts over Data
Streams. VLDB’02.
• A. Metwally, D. Agrawal, and A. El Abbadi. Efficient computation of frequent
and top-k elements in data streams. ICDT'05
45

Ref: Freq. Pattern Mining Applications
• T. Dasu, T. Johnson, S. Muthukrishnan, and V. Shkapenyuk. Mining Database Structure; or How to Build a Data Quality Browser. SIGMOD'02
• M. Khan, H. Le, H. Ahmadi, T. Abdelzaher, and J. Han. DustMiner: Troubleshooting interactive complexity bugs in sensor networks. SenSys'08
• Z. Li, S. Lu, S. Myagmar, and Y. Zhou. CP-Miner: A tool for finding copy-paste and related
bugs in operating system code. In Proc. 2004 Symp. Operating Systems Design and
Implementation (OSDI'04)
• Z. Li and Y. Zhou. PR-Miner: Automatically extracting implicit programming rules and
detecting violations in large software code. FSE'05
• D. Lo, H. Cheng, J. Han, S. Khoo, and C. Sun. Classification of software behaviors for failure
detection: A discriminative pattern mining approach. KDD'09
• Q. Mei, D. Xin, H. Cheng, J. Han, and C. Zhai. Semantic annotation of frequent patterns.
ACM TKDD, 2007.
• K. Wang, S. Zhou, J. Han. Profit Mining: From Patterns to Actions. EDBT’02.