
Unit 3

Contents

• Data mining knowledge representation
• Task-relevant data
• Background knowledge
• Interestingness measures
• Representing input data and output knowledge
• Visualization techniques
• Attribute-oriented analysis
• Attribute generalization
• Attribute relevance
• Class comparison
• Statistical measures
• Data mining algorithms: Association rules
• Motivation and terminology
• Example: mining weather data
• Basic idea: item sets
• Generating item sets and rules efficiently
• Correlation analysis
What Is Frequent Pattern Analysis?

• Frequent pattern: a pattern (a set of items, subsequences, substructures,
  etc.) that occurs frequently in a data set
• First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of
  frequent itemsets and association rule mining
• Motivation: finding inherent regularities in data
  • What products were often purchased together? Beer and diapers?!
  • What are the subsequent purchases after buying a PC?
  • What kinds of DNA are sensitive to this new drug?
  • Can we automatically classify web documents?
• Applications: basket data analysis, cross-marketing, catalog design, sales
  campaign analysis, Web log (click stream) analysis, and DNA sequence
  analysis
Why Is Frequent Pattern Mining Important?

• Frequent pattern: an intrinsic and important property of datasets
• Foundation for many essential data mining tasks
  • Association, correlation, and causality analysis
  • Sequential and structural (e.g., sub-graph) patterns
  • Pattern analysis in spatiotemporal, multimedia, time-series, and stream
    data
  • Classification: discriminative frequent pattern analysis
  • Cluster analysis: frequent pattern-based clustering
  • Data warehousing: iceberg cube and cube-gradient
  • Semantic data compression: fascicles
  • Broad applications
Basic Concepts: Frequent Patterns

Tid | Items bought
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk

• itemset: a set of one or more items
• k-itemset: X = {x1, …, xk}
• (absolute) support, or support count, of X: frequency or number of
  occurrences of itemset X
• (relative) support, s: the fraction of transactions that contain X
  (i.e., the probability that a transaction contains X)
• An itemset X is frequent if X's support is no less than a minsup threshold

(Figure: Venn diagram of customers buying beer, customers buying diapers,
and customers buying both.)
Basic Concepts: Association Rules

Tid | Items bought
10  | Beer, Nuts, Diaper
20  | Beer, Coffee, Diaper
30  | Beer, Diaper, Eggs
40  | Nuts, Eggs, Milk
50  | Nuts, Coffee, Diaper, Eggs, Milk

• Find all the rules X → Y with minimum support and confidence
  • support, s: probability that a transaction contains X ∪ Y
  • confidence, c: conditional probability that a transaction having X also
    contains Y
• Let minsup = 50%, minconf = 50%
  • Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
  • Association rules (many more!):
    • Beer → Diaper (support 60%, confidence 100%)
    • Diaper → Beer (support 60%, confidence 75%)
Closed Patterns and Max-Patterns

• A long pattern contains a combinatorial number of sub-patterns; e.g.,
  {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1
  ≈ 1.27 × 10^30 sub-patterns!
• Solution: mine closed patterns and max-patterns instead
• An itemset X is closed if X is frequent and there exists no super-pattern
  Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ ICDT'99)
• An itemset X is a max-pattern if X is frequent and there exists no frequent
  super-pattern Y ⊃ X (proposed by Bayardo @ SIGMOD'98)
• Closed patterns are a lossless compression of frequent patterns
  • Reduce the number of patterns and rules
Closed Patterns and Max-Patterns

• Exercise. DB = {<a1, …, a100>, <a1, …, a50>}, min_sup = 1
  • What is the set of closed itemsets?
    • <a1, …, a100>: 1
    • <a1, …, a50>: 2
  • What is the set of max-patterns?
    • <a1, …, a100>: 1
  • What is the set of all frequent patterns?
    • All 2^100 − 1 nonempty subsets of {a1, …, a100}: far too many to list!
Computational Complexity of Frequent Itemset Mining

• How many itemsets may potentially be generated in the worst case?
  • The number of frequent itemsets to be generated is sensitive to the
    minsup threshold
  • When minsup is low, there exist potentially an exponential number of
    frequent itemsets
  • The worst case: M^N, where M = # distinct items and N = max length of
    transactions
• The worst-case complexity vs. the expected probability
  • Ex. Suppose Walmart has 10^4 kinds of products
    • The chance of picking up one particular product: 10^-4
    • The chance of picking up a particular set of 10 products: ~10^-40
    • What is the chance that this particular set of 10 products is
      frequent, appearing 10^3 times in 10^9 transactions?
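A quick back-of-the-envelope answer to the closing question (a sketch,
assuming as the slide does that products are picked independently and
uniformly): the expected number of transactions containing that particular
10-product set is E[X] = np = 10^9 × 10^-40 = 10^-31. By Markov's
inequality, P(X ≥ 10^3) ≤ E[X] / 10^3 = 10^-34, i.e., essentially zero.
The exponential worst case therefore says little about expected behavior.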
The Downward Closure Property and Scalable Mining Methods

• The downward closure property of frequent patterns
  • Any subset of a frequent itemset must be frequent
  • If {beer, diaper, nuts} is frequent, so is {beer, diaper}
  • i.e., every transaction having {beer, diaper, nuts} also contains
    {beer, diaper}
• Scalable mining methods: three major approaches
  • Apriori (Agrawal & Srikant @ VLDB'94)
  • Frequent pattern growth (FPgrowth; Han, Pei & Yin @ SIGMOD'00)
  • Vertical data format approach (CHARM; Zaki & Hsiao @ SDM'02)
Apriori: A Candidate Generation & Test Approach

• Apriori pruning principle: if there is any itemset that is infrequent, its
  supersets should not be generated/tested!
• Method:
  • Initially, scan the DB once to get the frequent 1-itemsets
  • Generate length-(k+1) candidate itemsets from length-k frequent itemsets
  • Test the candidates against the DB
  • Terminate when no frequent or candidate set can be generated
The Apriori Algorithm: An Example

supmin = 2

Database TDB:
Tid | Items
10  | A, C, D
20  | B, C, E
30  | A, B, C, E
40  | B, E

1st scan → C1:           L1 (after pruning {D}):
Itemset | sup            Itemset | sup
{A}     | 2              {A}     | 2
{B}     | 3              {B}     | 3
{C}     | 3              {C}     | 3
{D}     | 1              {E}     | 3
{E}     | 3

C2 (generated from L1): {A, B}, {A, C}, {A, E}, {B, C}, {B, E}, {C, E}

2nd scan → C2 counts:    L2 (after pruning {A, B} and {A, E}):
Itemset | sup            Itemset | sup
{A, B}  | 1              {A, C}  | 2
{A, C}  | 2              {B, C}  | 2
{A, E}  | 1              {B, E}  | 3
{B, C}  | 2              {C, E}  | 2
{B, E}  | 3
{C, E}  | 2

C3: {B, C, E}

3rd scan → L3:
Itemset   | sup
{B, C, E} | 2
The Apriori Algorithm (Pseudo-Code)

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t
    Lk+1 = candidates in Ck+1 with min_support
end
return ∪k Lk;
Implementation of Apriori

• How to generate candidates?
  • Step 1: self-join Lk
  • Step 2: pruning
• Example of candidate generation
  • L3 = {abc, abd, acd, ace, bcd}
  • Self-join: L3 * L3
    • abcd from abc and abd
    • acde from acd and ace
  • Pruning:
    • acde is removed because ade is not in L3
  • C4 = {abcd}
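As a concrete check, here is a small sketch of the two steps on the L3
example above (itemsets written as sorted tuples; illustrative only):

    from itertools import combinations

    L3 = {("a","b","c"), ("a","b","d"), ("a","c","d"),
          ("a","c","e"), ("b","c","d")}

    # Step 1: self-join -- merge pairs sharing their first k-2 = 2 items.
    joined = {p + (q[2],) for p in L3 for q in L3
              if p[:2] == q[:2] and p[2] < q[2]}
    # joined == {("a","b","c","d"), ("a","c","d","e")}

    # Step 2: prune -- drop candidates having an infrequent (k-1)-subset.
    C4 = {c for c in joined if all(s in L3 for s in combinations(c, 3))}
    print(C4)   # {("a","b","c","d")}: acde is removed, ade is not in L3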
How to Count Supports of Candidates?

• Why is counting supports of candidates a problem?
  • The total number of candidates can be very huge
  • One transaction may contain many candidates
• Method:
  • Candidate itemsets are stored in a hash-tree
  • A leaf node of the hash-tree contains a list of itemsets and counts
  • An interior node contains a hash table
  • Subset function: finds all the candidates contained in a transaction
Counting Supports of Candidates Using Hash Tree

(Figure: a hash tree over the candidate 3-itemsets 145, 124, 457, 125, 458,
159, 136, 234, 567, 345, 356, 357, 689, 367, 368, with interior nodes
hashing items into the branches {1,4,7}, {2,5,8}, {3,6,9}. The subset
function expands the transaction 1 2 3 5 6 recursively, as 1+2356, 12+356,
13+56, etc., visiting only the branches that can contain its subsets.)
Candidate Generation: An SQL Implementation

• Suppose the items in Lk-1 are listed in an order
• Step 1: self-join Lk-1

  insert into Ck
  select p.item1, p.item2, …, p.itemk-1, q.itemk-1
  from Lk-1 p, Lk-1 q
  where p.item1 = q.item1 and … and p.itemk-2 = q.itemk-2
    and p.itemk-1 < q.itemk-1

• Step 2: pruning

  forall itemsets c in Ck do
      forall (k-1)-subsets s of c do
          if (s is not in Lk-1) then delete c from Ck

• Use object-relational extensions like UDFs, BLOBs, and table functions
  for an efficient implementation
Further Improvement of the Apriori Method

• Major computational challenges
  • Multiple scans of the transaction database
  • Huge number of candidates
  • Tedious workload of support counting for candidates
• Improving Apriori: general ideas
  • Reduce the number of transaction-database scans
  • Shrink the number of candidates
  • Facilitate the support counting of candidates
Partition: Scan Database Only Twice

• Any itemset that is potentially frequent in DB must be frequent in at
  least one of the partitions of DB
  • Scan 1: partition the database and find local frequent patterns
  • Scan 2: consolidate global frequent patterns

  DB1 + DB2 + … + DBk = DB

  If sup1(i) < σ|DB1|, sup2(i) < σ|DB2|, …, supk(i) < σ|DBk|,
  then sup(i) < σ|DB|
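A two-scan sketch of the idea (assumes some in-memory miner for scan 1,
e.g. the apriori() sketch shown earlier; names are illustrative):

    import math

    def partition_mine(transactions, rel_minsup, num_parts, local_miner):
        # transactions: list of sets; rel_minsup: relative threshold sigma.
        n = len(transactions)
        size = math.ceil(n / num_parts)

        # Scan 1: mine each partition with a proportionally scaled
        # threshold. Any globally frequent itemset must be locally
        # frequent in at least one partition.
        candidates = set()
        for start in range(0, n, size):
            part = transactions[start:start + size]
            local_min = max(1, math.ceil(rel_minsup * len(part)))
            candidates |= set(local_miner(part, local_min))

        # Scan 2: count every local winner once over the full database.
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        return {c: s for c, s in counts.items() if s >= rel_minsup * n}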
DHP: Reduce the Number of Candidates

• A k-itemset whose corresponding hashing bucket count is below the
  threshold cannot be frequent
  • Candidates: a, b, c, d, e
  • Hash entries
    • {ab, ad, ae}
    • {bd, be, de}
    • …

    Hash table:
    count | itemsets
    35    | {ab, ad, ae}
    88    | {bd, be, de}
    …     | …
    102   | {yz, qs, wt}

  • Frequent 1-itemsets: a, b, d, e
  • ab is not a candidate 2-itemset if the sum of the counts of
    {ab, ad, ae} is below the support threshold
• J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining
  association rules. SIGMOD'95
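A sketch of the hashing trick (illustrative; the bucket array would be
built during the same scan that counts 1-itemsets):

    from itertools import combinations

    def dhp_pair_filter(transactions, num_buckets, min_support):
        # Hash every 2-itemset of every transaction into a bucket.
        buckets = [0] * num_buckets
        for t in transactions:
            for pair in combinations(sorted(t), 2):
                buckets[hash(pair) % num_buckets] += 1

        # A pair can only be a candidate 2-itemset if its bucket total
        # reaches min_support: the bucket count is an upper bound on the
        # pair's true support.
        def may_be_frequent(pair):
            key = tuple(sorted(pair))
            return buckets[hash(key) % num_buckets] >= min_support
        return may_be_frequent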
Sampling for Frequent Patterns

• Select a sample of the original database; mine frequent patterns within
  the sample using Apriori
• Scan the database once to verify the frequent itemsets found in the
  sample; only borders of the closure of frequent patterns are checked
  • Example: check abcd instead of ab, ac, …, etc.
• Scan the database again to find missed frequent patterns
• H. Toivonen. Sampling large databases for association rules. VLDB'96
DIC: Reduce the Number of Scans

• Once both A and D are determined frequent, the counting of AD begins
• Once all length-2 subsets of BCD are determined frequent, the counting of
  BCD begins

(Figure: the itemset lattice over {A, B, C, D}, from {} through the
1-itemsets A, B, C, D, the 2-itemsets AB, AC, BC, AD, BD, CD, the
3-itemsets ABC, ABD, ACD, BCD, up to ABCD; alongside, a transaction
timeline showing that Apriori starts counting 2-itemsets only after a full
pass over the data, while DIC starts counting 2- and 3-itemsets as soon as
their subsets are known to be frequent.)

• S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and
  implication rules for market basket data. SIGMOD'97
Scalable Frequent Itemset Mining Methods

• Apriori: A Candidate Generation-and-Test Approach
• Improving the Efficiency of Apriori
• FPGrowth: A Frequent Pattern-Growth Approach
• ECLAT: Frequent Pattern Mining with Vertical Data Format
• Mining Closed Frequent Patterns and Max-Patterns
Pattern-Growth Approach: Mining Frequent Patterns Without Candidate
Generation

• Bottlenecks of the Apriori approach
  • Breadth-first (i.e., level-wise) search
  • Candidate generation and test
    • Often generates a huge number of candidates
• The FPGrowth approach (J. Han, J. Pei, and Y. Yin, SIGMOD'00)
  • Depth-first search
  • Avoids explicit candidate generation
• Major philosophy: grow long patterns from short ones using local frequent
  items only
  • "abc" is a frequent pattern
  • Get all transactions having "abc", i.e., project the DB on abc: DB|abc
  • "d" is a local frequent item in DB|abc → abcd is a frequent pattern
Construct FP-tree from a Transaction Database

min_support = 3

TID | Items bought                | (ordered) frequent items
100 | {f, a, c, d, g, i, m, p}    | {f, c, a, m, p}
200 | {a, b, c, f, l, m, o}       | {f, c, a, b, m}
300 | {b, f, h, j, o, w}          | {f, b}
400 | {b, c, k, s, p}             | {c, b, p}
500 | {a, f, c, e, l, p, m, n}    | {f, c, a, m, p}

1. Scan the DB once; find the frequent 1-itemsets (single-item patterns):
   f:4, c:4, a:3, b:3, m:3, p:3
2. Sort the frequent items in frequency-descending order into the f-list:
   F-list = f-c-a-b-m-p
3. Scan the DB again; construct the FP-tree by inserting each transaction's
   ordered frequent items:

   {}
   ├── f:4
   │   ├── c:3
   │   │   └── a:3
   │   │       ├── m:2
   │   │       │   └── p:2
   │   │       └── b:1
   │   │           └── m:1
   │   └── b:1
   └── c:1
       └── b:1
           └── p:1

   A header table records, for each item in the f-list, its frequency and
   the head of a node-link chain through all tree nodes carrying that item.
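A minimal construction sketch following the three steps above
(illustrative; a real implementation would thread the node-links through
the tree and pin down tie-breaking in the f-list):

    from collections import defaultdict

    class FPNode:
        def __init__(self, item, parent):
            self.item, self.parent = item, parent
            self.count = 0
            self.children = {}

    def build_fptree(transactions, min_support):
        # Scan 1: item frequencies.
        freq = defaultdict(int)
        for t in transactions:
            for item in set(t):
                freq[item] += 1

        # F-list: frequent items in frequency-descending order.
        flist = [i for i, c in sorted(freq.items(), key=lambda x: -x[1])
                 if c >= min_support]
        rank = {item: r for r, item in enumerate(flist)}

        # Scan 2: insert each transaction's ordered frequent items.
        root = FPNode(None, None)
        header = defaultdict(list)   # item -> nodes (the node-links)
        for t in transactions:
            ordered = sorted((i for i in set(t) if i in rank), key=rank.get)
            node = root
            for item in ordered:
                if item not in node.children:
                    node.children[item] = FPNode(item, node)
                    header[item].append(node.children[item])
                node = node.children[item]
                node.count += 1
        return root, header, flist

Running this on the five transactions above with min_support = 3 yields the
f-list f-c-a-b-m-p (up to tie-breaking among equally frequent items) and
the tree drawn above.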
Partition Patterns and Databases

• Frequent patterns can be partitioned into subsets according to the f-list
  • F-list = f-c-a-b-m-p
  • Patterns containing p
  • Patterns having m but no p
  • …
  • Patterns having c but none of a, b, m, p
  • Pattern f
• Completeness and non-redundancy
Find Patterns Having p From p's Conditional Database

• Start at the frequent-item header table of the FP-tree
• Traverse the FP-tree by following the node-links of each frequent item p
• Accumulate all transformed prefix paths of item p to form p's conditional
  pattern base

Conditional pattern bases (from the FP-tree above):
item | conditional pattern base
c    | f:3
a    | fc:3
b    | fca:1, f:1, c:1
m    | fca:2, fcab:1
p    | fcam:2, cb:1
From Conditional Pattern Bases to Conditional FP-trees

• For each pattern base
  • Accumulate the count for each item in the base
  • Construct the FP-tree for the frequent items of the pattern base

m's conditional pattern base: fca:2, fcab:1
(b is dropped: its accumulated count, 1, is below min_support = 3)

m-conditional FP-tree:
{}
└── f:3
    └── c:3
        └── a:3

All frequent patterns relating to m:
m, fm, cm, am, fcm, fam, cam, fcam
Recursion: Mining Each Conditional FP-tree

• Conditional pattern base of "am": (fc:3)
  am-conditional FP-tree: {} → f:3 → c:3
• Conditional pattern base of "cm": (f:3)
  cm-conditional FP-tree: {} → f:3
• Conditional pattern base of "cam": (f:3)
  cam-conditional FP-tree: {} → f:3
A Special Case: Single Prefix Path in FP-tree

• Suppose a (conditional) FP-tree T has a shared single prefix path P
• Mining can be decomposed into two parts
  • Reduction of the single prefix path into one node
  • Concatenation of the mining results of the two parts

(Figure: a tree whose root chain a1:n1 → a2:n2 → a3:n3 branches into
subtrees b1:m1, C1:k1, C2:k2, C3:k3 is split into the single-path part
{} → a1:n1 → a2:n2 → a3:n3 plus a reduced tree r1 holding the branches
b1:m1, C1:k1, C2:k2, C3:k3; the mining results of the two parts are then
concatenated.)
Benefits of the FP-tree Structure

• Completeness
  • Preserves complete information for frequent pattern mining
  • Never breaks a long pattern of any transaction
• Compactness
  • Reduces irrelevant information: infrequent items are gone
  • Items in frequency-descending order: the more frequently an item
    occurs, the more likely it is to be shared
  • Never larger than the original database (not counting node-links and
    the count fields)
The Frequent Pattern Growth Mining Method

• Idea: frequent pattern growth
  • Recursively grow frequent patterns by pattern and database partition
• Method (see the sketch below)
  • For each frequent item, construct its conditional pattern base, and
    then its conditional FP-tree
  • Repeat the process on each newly created conditional FP-tree
  • Until the resulting FP-tree is empty, or it contains only one path;
    a single path will generate all the combinations of its sub-paths,
    each of which is a frequent pattern
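To make the recursion concrete, here is a deliberately simplified
pattern-growth sketch that mines projected transaction lists directly
instead of compact conditional FP-trees, so it shows the divide-and-conquer
logic rather than FP-growth's efficiency. It also uses a fixed
lexicographic item order where FP-growth would use the frequency-descending
f-list:

    from collections import defaultdict

    def pattern_growth(transactions, min_support, suffix=()):
        # Count local item frequencies in this (projected) database.
        freq = defaultdict(int)
        for t in transactions:
            for item in set(t):
                freq[item] += 1

        results = {}
        for item in sorted(freq):
            if freq[item] < min_support:
                continue
            pattern = suffix + (item,)
            results[pattern] = freq[item]
            # Project on `item`, keeping only items after it in the fixed
            # order (prevents enumerating a pattern twice), and recurse.
            projected = [[i for i in t if i > item] for t in transactions
                         if item in t]
            results.update(pattern_growth(projected, min_support, pattern))
        return results

    db = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
    print(pattern_growth(db, 2))  # same itemsets as the Apriori example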
Scaling FP-growth by Database Projection

• What if the FP-tree cannot fit in memory?
  • DB projection
  • First partition the database into a set of projected DBs
  • Then construct and mine an FP-tree for each projected DB
• Parallel projection vs. partition projection
  • Parallel projection
    • Project the DB in parallel for each frequent item
    • Space-costly, but all the partitions can be processed in parallel
  • Partition projection
    • Partition the DB based on the ordered frequent items
    • Pass the unprocessed parts on to the subsequent partitions
Partition-Based Projection

• Parallel projection needs a lot of disk space
• Partition projection saves it

Transaction DB: fcamp, fcabm, fb, cbp, fcamp

p-proj DB | m-proj DB | b-proj DB | a-proj DB | c-proj DB | f-proj DB
fcam      | fcab      | f         | fc        | f         | …
cb        | fca       | cb        | …         | …         |
fcam      | fca       | …         |           |           |

am-proj DB | cm-proj DB | …
fc         | f
fc         | f
fc         | f
Performance of FPGrowth in Large Datasets

(Figure: two plots of runtime in seconds vs. support threshold (%). Left:
FP-growth vs. Apriori on data set T25I20D10K; right: FP-growth vs.
TreeProjection on data set T25I20D100K. In both plots FP-growth's runtime
grows far more slowly as the support threshold decreases.)
Advantages of the Pattern Growth Approach

• Divide-and-conquer:
  • Decompose both the mining task and the DB according to the frequent
    patterns obtained so far
  • Leads to focused search of smaller databases
• Other factors
  • No candidate generation, no candidate test
  • Compressed database: the FP-tree structure
  • No repeated scan of the entire database
  • Basic operations: counting local frequent items and building sub
    FP-trees; no pattern search and matching
• A good open-source implementation and refinement of FPGrowth
  • FPGrowth+ (Grahne and J. Zhu, FIMI'03)
Further Improvements of Mining Methods

• AFOPT (Liu, et al. @ KDD'03)
  • A "push-right" method for mining condensed frequent pattern (CFP) trees
• CARPENTER (Pan, et al. @ KDD'03)
  • Mines data sets with few rows but numerous columns
  • Constructs a row-enumeration tree for efficient mining
• FPgrowth+ (Grahne and Zhu, FIMI'03)
  • Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc.
    ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations
    (FIMI'03), Melbourne, FL, Nov. 2003
• TD-Close (Liu, et al. @ SDM'06)

Extension of the Pattern Growth Mining Methodology

• Mining closed frequent itemsets and max-patterns
  • CLOSET (DMKD'00), FPclose, and FPmax (Grahne & Zhu, FIMI'03)
• Mining sequential patterns
  • PrefixSpan (ICDE'01), CloSpan (SDM'03), BIDE (ICDE'04)
• Mining graph patterns
  • gSpan (ICDM'02), CloseGraph (KDD'03)
• Constraint-based mining of frequent patterns
  • Convertible constraints (ICDE'01), gPrune (PAKDD'03)
• Computing iceberg data cubes with complex measures
  • H-tree, H-cubing, and Star-cubing (SIGMOD'01, VLDB'03)
• Pattern-growth-based clustering
  • MaPle (Pei, et al. @ ICDM'03)
• Pattern-growth-based classification
  • Mining frequent and discriminative patterns (Cheng, et al. @ ICDE'07)
Scalable Frequent Itemset Mining Methods

• Apriori: A Candidate Generation-and-Test Approach
• Improving the Efficiency of Apriori
• FPGrowth: A Frequent Pattern-Growth Approach
• ECLAT: Frequent Pattern Mining with Vertical Data Format
• Mining Closed Frequent Patterns and Max-Patterns
ECLAT: Mining by Exploring the Vertical Data Format

• Vertical format: t(AB) = {T11, T25, …}
  • tid-list: the list of transaction ids containing an itemset
• Deriving frequent patterns based on vertical intersections
  • t(X) = t(Y): X and Y always happen together
  • t(X) ⊂ t(Y): a transaction having X always has Y
• Using diffsets to accelerate mining
  • Only keep track of differences of tids
  • t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
  • Diffset(XY, X) = {T2}
• Eclat (Zaki et al. @ KDD'97)
• Mining closed patterns using the vertical format: CHARM (Zaki & Hsiao
  @ SDM'02)
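A minimal vertical-format sketch (illustrative): build the tid-lists in one
pass, then intersect them depth-first. A diffset-based variant would store
t(X) − t(XY) instead of full tid-lists to save space.

    from collections import defaultdict

    def eclat(transactions, min_support):
        # Vertical format: item -> set of transaction ids (tid-list).
        tidlists = defaultdict(set)
        for tid, t in enumerate(transactions):
            for item in t:
                tidlists[item].add(tid)

        results = {}
        def search(prefix, items):
            # items: list of (item, tidset) pairs, sorted by item.
            for i, (item, tids) in enumerate(items):
                if len(tids) < min_support:
                    continue
                pattern = prefix + (item,)
                results[pattern] = len(tids)
                # t(XY) = t(X) ∩ t(Y): extend the pattern by intersection.
                search(pattern, [(nxt, tids & nxt_tids)
                                 for nxt, nxt_tids in items[i + 1:]])
        search((), sorted(tidlists.items()))
        return results

    db = [{"A", "C", "D"}, {"B", "C", "E"}, {"A", "B", "C", "E"}, {"B", "E"}]
    print(eclat(db, 2))   # same itemsets as before, e.g. ('B','C','E'): 2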
Scalable Frequent Itemset Mining Methods

• Apriori: A Candidate Generation-and-Test Approach
• Improving the Efficiency of Apriori
• FPGrowth: A Frequent Pattern-Growth Approach
• ECLAT: Frequent Pattern Mining with Vertical Data Format
• Mining Closed Frequent Patterns and Max-Patterns
Mining Frequent Closed Patterns: CLOSET

• F-list: list of all frequent items in support-ascending order
  • F-list = d-a-f-e-c, min_sup = 2

  TID | Items
  10  | a, c, d, e, f
  20  | a, b, e
  30  | c, e, f
  40  | a, c, d, f
  50  | c, e, f

• Divide the search space
  • Patterns having d
  • Patterns having d but no a, etc.
• Find frequent closed patterns recursively
  • Every transaction having d also has cfa → cfad is a frequent closed
    pattern
• J. Pei, J. Han & R. Mao. "CLOSET: An Efficient Algorithm for Mining
  Frequent Closed Itemsets", DMKD'00
CLOSET+: Mining Closed Itemsets by Pattern Growth

• Itemset merging: if Y appears in every occurrence of X, then Y is merged
  with X
• Sub-itemset pruning: if Y ⊃ X and sup(X) = sup(Y), X and all of X's
  descendants in the set-enumeration tree can be pruned
• Hybrid tree projection
  • Bottom-up physical tree projection
  • Top-down pseudo tree projection
• Item skipping: if a local frequent item has the same support in several
  header tables at different levels, prune it from the header tables at
  higher levels
• Efficient subset checking
MaxMiner: Mining Max-Patterns

Tid | Items
10  | A, B, C, D, E
20  | B, C, D, E
30  | A, C, D, F

• 1st scan: find frequent items
  • A, B, C, D, E
• 2nd scan: find support for the potential max-patterns
  • AB, AC, AD, AE, ABCDE
  • BC, BD, BE, BCDE
  • CD, CE, CDE, DE
• Since BCDE is a max-pattern, there is no need to check BCD, BDE, CDE in a
  later scan
• R. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98
CHARM: Mining by Exploring the Vertical Data Format

• Vertical format: t(AB) = {T11, T25, …}
  • tid-list: the list of transaction ids containing an itemset
• Deriving closed patterns based on vertical intersections
  • t(X) = t(Y): X and Y always happen together
  • t(X) ⊂ t(Y): a transaction having X always has Y
• Using diffsets to accelerate mining
  • Only keep track of differences of tids
  • t(X) = {T1, T2, T3}, t(XY) = {T1, T3}
  • Diffset(XY, X) = {T2}
• Eclat/MaxEclat (Zaki et al. @ KDD'97), VIPER (P. Shenoy et al. @
  SIGMOD'00), CHARM (Zaki & Hsiao @ SDM'02)
Visualization of Association Rules: Plane Graph

Visualization of Association Rules: Rule Graph

Visualization of Association Rules (SGI/MineSet 3.0)
Chapter 5: Mining Frequent Patterns, Associations and Correlations: Basic
Concepts and Methods

• Basic Concepts
• Frequent Itemset Mining Methods
• Which Patterns Are Interesting? Pattern Evaluation Methods
• Summary
Interestingness Measure: Correlations (Lift)

• play basketball ⇒ eat cereal [40%, 66.7%] is misleading
  • The overall percentage of students eating cereal is 75% > 66.7%
• play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although
  it has lower support and confidence
• Measure of dependent/correlated events: lift

  lift(A, B) = P(A ∪ B) / (P(A) P(B))

              | Basketball | Not basketball | Sum (row)
  Cereal      | 2000       | 1750           | 3750
  Not cereal  | 1000       | 250            | 1250
  Sum (col.)  | 3000       | 2000           | 5000

  lift(B, C)  = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
  lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33
Are Lift and χ² Good Measures of Correlation?

• "Buy walnuts ⇒ buy milk [1%, 80%]" is misleading if 85% of customers buy
  milk
• Support and confidence are not good indicators of correlation
• Over 20 interestingness measures have been proposed (see Tan, Kumar,
  Srivastava @ KDD'02)
• Which are the good ones?
Null-Invariant Measures: Comparison of Interestingness Measures

• Null-(transaction) invariance is crucial for correlation analysis
• Lift and χ² are not null-invariant
• Five null-invariant measures exist (e.g., the Kulczynski measure, 1927)

              | Milk  | No Milk | Sum (row)
  Coffee      | m, c  | ¬m, c   | c
  No Coffee   | m, ¬c | ¬m, ¬c  | ¬c
  Sum (col.)  | m     | ¬m      | Σ

  Null-transactions w.r.t. m and c: transactions containing neither milk
  nor coffee (¬m, ¬c)

• Subtle: the null-invariant measures disagree with one another
Analysis of DBLP Coauthor Relationships

• Recent DB conferences, removing balanced associations, low support, etc.
• Advisor-advisee relations: Kulczynski high, coherence low, cosine middle
• Tianyi Wu, Yuguo Chen and Jiawei Han, "Association Mining in Large
  Databases: A Re-Examination of Its Measures", Proc. 2007 Int. Conf.
  Principles and Practice of Knowledge Discovery in Databases (PKDD'07),
  Sept. 2007
Which Null-Invariant Measure Is Better?

• IR (Imbalance Ratio): measures the imbalance of the two itemsets A and B
  in rule implications
• Kulczynski and the Imbalance Ratio (IR) together present a clear picture
  for all three datasets D4 through D6
  • D4 is balanced and neutral
  • D5 is imbalanced and neutral
  • D6 is very imbalanced and neutral
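For reference, the three measures discussed in this section can be computed
from absolute supports as follows (a sketch; Kulc(A, B) = ½(P(A|B) + P(B|A))
and IR(A, B) = |sup(A) − sup(B)| / (sup(A) + sup(B) − sup(A ∪ B)), the
definitions used in this chapter):

    def correlation_measures(sup_a, sup_b, sup_ab, n):
        # sup_a, sup_b: supports of A and B; sup_ab: support of A and B
        # together; n: total number of transactions.
        lift = (sup_ab / n) / ((sup_a / n) * (sup_b / n))
        kulc = 0.5 * (sup_ab / sup_a + sup_ab / sup_b)   # null-invariant
        ir = abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)
        return lift, kulc, ir

    # Basketball/cereal contingency table from the lift slide:
    # n = 5000, sup(basketball) = 3000, sup(cereal) = 3750, sup(both) = 2000.
    print(correlation_measures(3000, 3750, 2000, 5000))
    # lift ≈ 0.89 (negative correlation), Kulc = 0.60, IR ≈ 0.16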
Chapter 5: Mining Frequent Patterns, Associations and Correlations: Basic
Concepts and Methods

• Basic Concepts
• Frequent Itemset Mining Methods
• Which Patterns Are Interesting? Pattern Evaluation Methods
• Summary
Summary

• Basic concepts: association rules, the support-confidence framework,
  closed and max-patterns
• Scalable frequent pattern mining methods
  • Apriori (candidate generation and test)
  • Projection-based (FPgrowth, CLOSET+, …)
  • Vertical format approach (ECLAT, CHARM, …)
• Which patterns are interesting?
  • Pattern evaluation methods
Ref: Basic Concepts of Frequent Pattern Mining

• (Association Rules) R. Agrawal, T. Imielinski, and A. Swami. Mining


association rules between sets of items in large databases. SIGMOD'93
• (Max-pattern) R. J. Bayardo. Efficiently mining long patterns from
databases. SIGMOD'98
• (Closed-pattern) N. Pasquier, Y. Bastide, R. Taouil, and L. Lakhal. Discovering
frequent closed itemsets for association rules. ICDT'99
• (Sequential pattern) R. Agrawal and R. Srikant. Mining sequential patterns.
ICDE'95
Ref: Apriori and Its Improvements
• R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB'94
• H. Mannila, H. Toivonen, and A. I. Verkamo. Efficient algorithms for discovering
association rules. KDD'94
• A. Savasere, E. Omiecinski, and S. Navathe. An efficient algorithm for mining
association rules in large databases. VLDB'95
• J. S. Park, M. S. Chen, and P. S. Yu. An effective hash-based algorithm for mining
association rules. SIGMOD'95
• H. Toivonen. Sampling large databases for association rules. VLDB'96
• S. Brin, R. Motwani, J. D. Ullman, and S. Tsur. Dynamic itemset counting and
implication rules for market basket analysis. SIGMOD'97
• S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with
relational database systems: Alternatives and implications. SIGMOD'98
Ref: Depth-First, Projection-Based FP Mining
• R. Agarwal, C. Aggarwal, and V. V. V. Prasad. A tree projection algorithm for generation of
frequent itemsets. J. Parallel and Distributed Computing, 2002.

• G. Grahne and J. Zhu, Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. FIMI'03

• B. Goethals and M. Zaki. An introduction to workshop on frequent itemset mining


implementations. Proc. ICDM’03 Int. Workshop on Frequent Itemset Mining Implementations
(FIMI’03), Melbourne, FL, Nov. 2003

• J. Han, J. Pei, and Y. Yin. Mining frequent patterns without candidate generation. SIGMOD'00

• J. Liu, Y. Pan, K. Wang, and J. Han. Mining Frequent Item Sets by Opportunistic Projection.
KDD'02

• J. Han, J. Wang, Y. Lu, and P. Tzvetkov. Mining Top-K Frequent Closed Patterns without
Minimum Support. ICDM'02

• J. Wang, J. Han, and J. Pei. CLOSET+: Searching for the Best Strategies for Mining Frequent
Closed Itemsets. KDD'03
Ref: Vertical Format and Row Enumeration Methods

• M. J. Zaki, S. Parthasarathy, M. Ogihara, and W. Li. Parallel algorithm for


discovery of association rules. DAMI'97.

• M. J. Zaki and C. J. Hsiao. CHARM: An Efficient Algorithm for Closed Itemset


Mining, SDM'02.

• C. Bucila, J. Gehrke, D. Kifer, and W. White. DualMiner: A Dual-Pruning


Algorithm for Itemsets with Constraints. KDD’02.

• F. Pan, G. Cong, A. K. H. Tung, J. Yang, and M. Zaki , CARPENTER: Finding


Closed Patterns in Long Biological Datasets. KDD'03.

• H. Liu, J. Han, D. Xin, and Z. Shao, Mining Interesting Patterns from Very High
Dimensional Data: A Top-Down Row Enumeration Approach, SDM'06.
Ref: Mining Correlations and Interesting Rules

• S. Brin, R. Motwani, and C. Silverstein. Beyond market basket: Generalizing


association rules to correlations. SIGMOD'97.
• M. Klemettinen, H. Mannila, P. Ronkainen, H. Toivonen, and A. I. Verkamo. Finding
interesting rules from large sets of discovered association rules. CIKM'94.
• R. J. Hilderman and H. J. Hamilton. Knowledge Discovery and Measures of Interest.
Kluwer Academic, 2001.
• C. Silverstein, S. Brin, R. Motwani, and J. Ullman. Scalable techniques for mining
causal structures. VLDB'98.
• P.-N. Tan, V. Kumar, and J. Srivastava. Selecting the Right Interestingness Measure
for Association Patterns. KDD'02.
• E. Omiecinski. Alternative Interest Measures for Mining Associations. TKDE’03.
• T. Wu, Y. Chen, and J. Han, “Re-Examination of Interestingness Measures in Pattern
Mining: A Unified Framework", Data Mining and Knowledge Discovery, 21(3):371-
397, 2010
