DWDWM Unit2

This document summarizes frequent pattern mining and association rule learning. It discusses: 1) frequent pattern mining, which aims to find patterns that occur frequently in a dataset, such as products that are often purchased together; 2) the Apriori algorithm, a seminal method that uses a generate-and-test approach to iteratively find frequent itemsets, exploiting the downward closure property to avoid unnecessary candidate generation; 3) other methods, such as FP-Growth, that use alternative approaches like pattern growth to mine frequent patterns from transactional data more efficiently and scalably.

Data Mining: Concepts and Techniques
Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods

Basic Concepts
Frequent Itemset Mining Methods
Which Patterns Are Interesting?—Pattern Evaluation Methods
Summary
What Is Frequent Pattern Analysis?

Frequent pattern: a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set
First proposed by Agrawal, Imielinski, and Swami [AIS93] in the context of frequent itemsets and association rule mining
Motivation: finding inherent regularities in data
- What products were often purchased together? Beer and diapers?!
- What are the subsequent purchases after buying a PC?
- What kinds of DNA are sensitive to this new drug?
- Can we automatically classify web documents?
Applications: basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis
Why Is Frequent Pattern Mining Important?

Frequent pattern: an intrinsic and important property of datasets
Foundation for many essential data mining tasks:
- Association, correlation, and causality analysis
- Sequential and structural (e.g., sub-graph) patterns
- Pattern analysis in spatiotemporal, multimedia, time-series, and stream data
- Classification: discriminative frequent pattern analysis
- Cluster analysis: frequent pattern-based clustering
- Data warehousing: iceberg cube and cube-gradient
- Semantic data compression: fascicles
- Broad applications
Basic Concepts: Frequent Patterns

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

- itemset: a set of one or more items
- k-itemset: X = {x1, …, xk}
- (absolute) support, or support count, of X: frequency or number of occurrences of an itemset X
- (relative) support, s: the fraction of transactions that contain X (i.e., the probability that a transaction contains X)
- An itemset X is frequent if X's support is no less than a minsup threshold

(Figure: Venn diagram of customers buying beer, customers buying diapers, and customers buying both.)
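As a minimal sketch of these definitions (illustrative code, not from the slides), support can be computed directly from the transaction table above:

```python
# Minimal sketch: absolute and relative support (illustrative, not from the slides)
transactions = {
    10: {"Beer", "Nuts", "Diaper"},
    20: {"Beer", "Coffee", "Diaper"},
    30: {"Beer", "Diaper", "Eggs"},
    40: {"Nuts", "Eggs", "Milk"},
    50: {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"},
}

def support_count(itemset, db):
    """Absolute support: number of transactions containing the itemset."""
    return sum(1 for items in db.values() if itemset <= items)

def support(itemset, db):
    """Relative support: fraction of transactions containing the itemset."""
    return support_count(itemset, db) / len(db)

print(support_count({"Beer", "Diaper"}, transactions))  # 3
print(support({"Beer", "Diaper"}, transactions))        # 0.6, frequent if minsup <= 60%
```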
Basic Concepts: Association Rules

Tid   Items bought
10    Beer, Nuts, Diaper
20    Beer, Coffee, Diaper
30    Beer, Diaper, Eggs
40    Nuts, Eggs, Milk
50    Nuts, Coffee, Diaper, Eggs, Milk

Find all the rules X → Y with minimum support and confidence
- support, s: probability that a transaction contains X ∪ Y
- confidence, c: conditional probability that a transaction having X also contains Y

Let minsup = 50%, minconf = 50%
Frequent patterns: Beer:3, Nuts:3, Diaper:4, Eggs:3, {Beer, Diaper}:3
Association rules (many more!):
- Beer → Diaper (60%, 100%)
- Diaper → Beer (60%, 75%)
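Continuing the sketch above (again illustrative, reusing `transactions` from the previous block), confidence follows directly from two support counts:

```python
# Sketch: confidence of a rule X -> Y (reuses `transactions` from the previous block)
def support_count(itemset, db):
    return sum(1 for items in db.values() if itemset <= items)

def confidence(X, Y, db):
    """conf(X -> Y) = sup(X ∪ Y) / sup(X)."""
    return support_count(X | Y, db) / support_count(X, db)

print(confidence({"Beer"}, {"Diaper"}, transactions))   # 1.0  -> Beer => Diaper (100%)
print(confidence({"Diaper"}, {"Beer"}, transactions))   # 0.75 -> Diaper => Beer (75%)
```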
Closed Patterns and Max-Patterns

A long pattern contains a combinatorial number of sub-patterns; e.g., {a1, …, a100} contains C(100,1) + C(100,2) + … + C(100,100) = 2^100 − 1 ≈ 1.27 × 10^30 sub-patterns!
Solution: mine closed patterns and max-patterns instead
- An itemset X is closed if X is frequent and there exists no super-pattern Y ⊃ X with the same support as X (proposed by Pasquier, et al. @ICDT'99)
- An itemset X is a max-pattern if X is frequent and there exists no frequent super-pattern Y ⊃ X (proposed by Bayardo @SIGMOD'98)
Closed patterns are a lossless compression of frequent patterns, reducing the number of patterns and rules
Closed Patterns and Max-Patterns

Exercise: suppose a DB contains only two transactions, <a1, …, a100> and <a1, …, a50>, and let min_sup = 1
- What is the set of closed itemsets?
  {a1, …, a100}: 1
  {a1, …, a50}: 2
- What is the set of max-patterns?
  {a1, …, a100}: 1
- What is the set of all frequent patterns?
  {a1}: 2, …, {a1, a2}: 2, …, {a1, a51}: 1, …, {a1, a2, …, a100}: 1
  A big number: 2^100 − 1. Why? Every non-empty subset of {a1, …, a100} appears in the first transaction, so at min_sup = 1 every one of them is frequent.
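On a database small enough to enumerate, closed and max-patterns can be checked by brute force. A minimal sketch (illustrative only, using a hypothetical three-transaction DB rather than the 100-item exercise, which is too large to enumerate):

```python
# Brute-force closed and max-patterns on a tiny DB (illustrative only)
from itertools import combinations

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}]   # hypothetical toy DB
min_sup = 1
items = sorted(set().union(*db))

def sup(X):
    return sum(1 for t in db if X <= t)

freq = [frozenset(c) for k in range(1, len(items) + 1)
        for c in combinations(items, k) if sup(frozenset(c)) >= min_sup]
# closed: frequent with no superset of equal support; max: frequent with no frequent superset
closed = [X for X in freq if not any(X < Y and sup(Y) == sup(X) for Y in freq)]
maxpat = [X for X in freq if not any(X < Y for Y in freq)]
print([(sorted(X), sup(X)) for X in closed])  # lossless summary of the frequent patterns
print([sorted(X) for X in maxpat])            # only the border: {a, b, c}
```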
Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods

Basic Concepts
Frequent Itemset Mining Methods
Which Patterns Are Interesting?—Pattern Evaluation Methods
Summary
Scalable Frequent Itemset Mining Methods

Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
The Downward Closure Property and Scalable Mining Methods

The downward closure property of frequent patterns:
- Any subset of a frequent itemset must be frequent
- If {beer, diaper, nuts} is frequent, so is {beer, diaper}
- i.e., every transaction having {beer, diaper, nuts} also contains {beer, diaper}
Scalable mining methods, three major approaches:
- Apriori (Agrawal & Srikant @VLDB'94)
- Frequent pattern growth (FPgrowth: Han, Pei & Yin @SIGMOD'00)
- Vertical data format approach (CHARM: Zaki & Hsiao @SDM'02)
Apriori: A Candidate Generation & Test Approach

Apriori pruning principle: if there is any itemset that is infrequent, its supersets should not be generated or tested! (Agrawal & Srikant @VLDB'94; Mannila, et al. @KDD'94)
Method:
- Initially, scan the DB once to get the frequent 1-itemsets
- Generate length-(k+1) candidate itemsets from length-k frequent itemsets
- Test the candidates against the DB
- Terminate when no frequent or candidate set can be generated
The Apriori Algorithm—An Example (min_sup = 2)

Database TDB:
Tid   Items
10    A, C, D
20    B, C, E
30    A, B, C, E
40    B, E

1st scan → C1: {A}:2, {B}:3, {C}:3, {D}:1, {E}:3
L1: {A}:2, {B}:3, {C}:3, {E}:3

C2 (from L1): {A,B}, {A,C}, {A,E}, {B,C}, {B,E}, {C,E}
2nd scan → C2 counts: {A,B}:1, {A,C}:2, {A,E}:1, {B,C}:2, {B,E}:3, {C,E}:2
L2: {A,C}:2, {B,C}:2, {B,E}:3, {C,E}:2

C3 (from L2): {B,C,E}
3rd scan → {B,C,E}:2
L3: {B,C,E}:2
The Apriori Algorithm (Pseudo-Code)

Ck: candidate itemsets of size k
Lk: frequent itemsets of size k

L1 = {frequent items};
for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk;
    for each transaction t in database do
        increment the count of all candidates in Ck+1 that are contained in t;
    Lk+1 = candidates in Ck+1 with min_support;
end
return ∪k Lk;
Implementation of Apriori

How to generate candidates?
- Step 1: self-joining Lk
- Step 2: pruning
Example of candidate generation:
- L3 = {abc, abd, acd, ace, bcd}
- Self-joining: L3 * L3
  abcd from abc and abd
  acde from acd and ace
- Pruning: acde is removed because ade is not in L3
- C4 = {abcd}
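The same join-and-prune example as a small sketch (illustrative; itemsets are kept as sorted tuples so the shared (k−1)-prefix join is easy to read):

```python
# Sketch: Apriori candidate generation (self-join + prune) for the example above
from itertools import combinations

L3 = [("a","b","c"), ("a","b","d"), ("a","c","d"), ("a","c","e"), ("b","c","d")]
k = 3

# Self-join: two k-itemsets sharing their first k-1 items yield a (k+1)-candidate
joined = {p + (q[-1],) for p in L3 for q in L3
          if p[:-1] == q[:-1] and p[-1] < q[-1]}
print(sorted(joined))   # [('a','b','c','d'), ('a','c','d','e')]

# Prune: drop any candidate with a k-subset missing from L3
L3set = set(L3)
C4 = [c for c in joined if all(s in L3set for s in combinations(c, k))]
print(C4)               # [('a','b','c','d')]: acde pruned because ade is not in L3
```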
Candidate Generation: An SQL Implementation

Suppose the items in Lk-1 are listed in an order.

Step 1: self-joining Lk-1

insert into Ck
select p.item1, p.item2, …, p.itemk-1, q.itemk-1
from Lk-1 p, Lk-1 q
where p.item1 = q.item1, …, p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1

Step 2: pruning

forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
        if (s is not in Lk-1) then delete c from Ck

Use object-relational extensions like UDFs, BLOBs, and table functions for efficient implementation [S. Sarawagi, S. Thomas, and R. Agrawal. Integrating association rule mining with relational database systems: Alternatives and implications. SIGMOD'98]
Scalable Frequent Itemset Mining Methods

Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
Mining Closed Frequent Patterns and Max-Patterns
Further Improvement of the Apriori Method

Major computational challenges:
- Multiple scans of the transaction database
- Huge number of candidates
- Tedious workload of support counting for candidates
Improving Apriori: general ideas
- Reduce passes of transaction database scans
- Shrink the number of candidates
- Facilitate support counting of candidates
Partition: Scan Database Only Twice

Any itemset that is potentially frequent in DB must be frequent in at least one of the partitions of DB
- Scan 1: partition the database and find local frequent patterns
- Scan 2: consolidate global frequent patterns
A. Savasere, E. Omiecinski, and S. Navathe, VLDB'95

DB1 ∪ DB2 ∪ … ∪ DBk = DB
If sup1(i) < σ|DB1|, sup2(i) < σ|DB2|, …, supk(i) < σ|DBk|, then sup(i) < σ|DB|. Contrapositively, a globally frequent itemset must reach the relative threshold σ in at least one partition, so the first scan finds all candidates and the second scan verifies them.
DHP: Reduce the Number of Candidates

A k-itemset whose corresponding hashing bucket count is below the threshold cannot be frequent
- Candidates: a, b, c, d, e
- Hash 2-itemsets into buckets and count, e.g.: {ab, ad, ae} → 35; {bd, be, de} → 88; …; {yz, qs, wt} → 102
- Frequent 1-itemsets: a, b, d, e
- ab is not a candidate 2-itemset if the count of its bucket {ab, ad, ae} is below the support threshold
J. Park, M. Chen, and P. Yu. An effective hash-based algorithm for mining association rules. SIGMOD'95
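A small sketch of the DHP idea (illustrative; the toy data and bucket count are mine, and Python's built-in hash stands in for DHP's hash function, so exact bucket contents vary per run, but collisions can only inflate counts and thus never lose a frequent pair):

```python
# Sketch: DHP-style hash filtering of candidate 2-itemsets (illustrative)
from itertools import combinations
from collections import Counter

db = [{"a", "b", "d"}, {"b", "d", "e"}, {"a", "c"}, {"b", "d"}]
min_support, n_buckets = 2, 7

# While counting 1-itemsets, also hash every 2-itemset of each transaction
bucket = Counter()
for t in db:
    for pair in combinations(sorted(t), 2):
        bucket[hash(pair) % n_buckets] += 1

# A pair can be a candidate only if its bucket count reaches min_support;
# bucket counts over-estimate true pair counts, so no frequent pair is lost
all_items = sorted({i for t in db for i in t})
candidates = [p for p in combinations(all_items, 2)
              if bucket[hash(p) % n_buckets] >= min_support]
print(candidates)  # a superset of the truly frequent pairs, e.g. ('b', 'd')
```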
Sampling for Frequent Patterns

Select a sample of the original database; mine frequent patterns within the sample using Apriori
Scan the database once to verify the frequent itemsets found in the sample; only the borders of the closure of the frequent patterns are checked
- Example: check abcd instead of ab, ac, …, etc.
Scan the database again to find missed frequent patterns
H. Toivonen. Sampling large databases for association rules. VLDB'96
DIC: Reduce Number of Scans

Once both A and D are determined frequent, the counting of AD begins
Once all length-2 subsets of BCD are determined frequent, the counting of BCD begins

(Figure: the itemset lattice from {} up to ABCD alongside a stream of transactions; Apriori counts each level (1-itemsets, then 2-itemsets, …) in separate full passes, while DIC starts counting 2- and 3-itemsets mid-pass as soon as their subsets are known to be frequent.)

S. Brin, R. Motwani, J. Ullman, and S. Tsur. Dynamic itemset counting and implication rules for market basket data. SIGMOD'97
Scalable Frequent Itemset Mining Methods

Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
Mining Closed Frequent Patterns and Max-Patterns
Pattern-Growth Approach: Mining Frequent Patterns Without Candidate Generation

Bottlenecks of the Apriori approach:
- Breadth-first (i.e., level-wise) search
- Candidate generation and test: often generates a huge number of candidates
The FPGrowth approach (J. Han, J. Pei, and Y. Yin, SIGMOD'00):
- Depth-first search
- Avoids explicit candidate generation
Major philosophy: grow long patterns from short ones using local frequent items only
- "abc" is a frequent pattern
- Get all transactions having "abc", i.e., project the DB on abc: DB|abc
- If "d" is a local frequent item in DB|abc, then abcd is a frequent pattern
Construct FP-tree from a Transaction Database (min_support = 3)

TID   Items bought                  (ordered) frequent items
100   {f, a, c, d, g, i, m, p}      {f, c, a, m, p}
200   {a, b, c, f, l, m, o}         {f, c, a, b, m}
300   {b, f, h, j, o, w}            {f, b}
400   {b, c, k, s, p}               {c, b, p}
500   {a, f, c, e, l, p, m, n}      {f, c, a, m, p}

1. Scan the DB once, find the frequent 1-itemsets (single-item patterns)
2. Sort the frequent items in frequency-descending order: F-list = f-c-a-b-m-p
3. Scan the DB again, construct the FP-tree

Header table: f:4, c:4, a:3, b:3, m:3, p:3

Resulting FP-tree (root {}):
{}
├─ f:4
│  ├─ c:3
│  │  └─ a:3
│  │     ├─ m:2
│  │     │  └─ p:2
│  │     └─ b:1
│  │        └─ m:1
│  └─ b:1
└─ c:1
   └─ b:1
      └─ p:1
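Steps 1–2 (find the frequent items, then reorder each transaction by the F-list) in a minimal sketch; the variable names are mine, and frequency ties (f/c and a/b/m/p) are broken arbitrarily here, while the slide fixes the order f-c-a-b-m-p:

```python
# Sketch: F-list construction and transaction reordering (steps 1-2 above)
from collections import Counter

db = [list("facdgimp"), list("abcflmo"), list("bfhjow"),
      list("bcksp"), list("afcelpmn")]
min_support = 3

counts = Counter(i for t in db for i in t)
flist = [i for i, n in counts.most_common() if n >= min_support]
print(flist)       # frequency-descending, e.g. ['f', 'c', 'a', ...] (tie order arbitrary)

ordered = [[i for i in flist if i in t] for t in db]
print(ordered[0])  # first transaction keeps only frequent items, in F-list order
```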
Partition Patterns and Databases

Frequent patterns can be partitioned into subsets according to the F-list (F-list = f-c-a-b-m-p):
- Patterns containing p
- Patterns having m but no p
- …
- Patterns having c but no a, b, m, or p
- Pattern f
This partitioning is complete and non-redundant
Find Patterns Having p From p-conditional Database

Starting at the frequent-item header table in the FP-tree:
- Traverse the FP-tree by following the node-links of each frequent item p
- Accumulate all the transformed prefix paths of item p to form p's conditional pattern base

Conditional pattern bases:
item   conditional pattern base
c      f:3
a      fc:3
b      fca:1, f:1, c:1
m      fca:2, fcab:1
p      fcam:2, cb:1
From Conditional Pattern-bases to Conditional FP-trees

For each pattern base:
- Accumulate the count for each item in the base
- Construct the FP-tree for the frequent items of the pattern base

m-conditional pattern base: fca:2, fcab:1
m-conditional FP-tree: {} → f:3 → c:3 → a:3 (b has count 1 here and is dropped as infrequent)
All frequent patterns relating to m: m, fm, cm, am, fcm, fam, cam, fcam
Recursion: Mining Each Conditional FP-tree

m-conditional FP-tree: {} → f:3 → c:3 → a:3
- Cond. pattern base of "am": (fc:3); am-conditional FP-tree: {} → f:3 → c:3
- Cond. pattern base of "cm": (f:3); cm-conditional FP-tree: {} → f:3
- Cond. pattern base of "cam": (f:3); cam-conditional FP-tree: {} → f:3
A Special Case: Single Prefix Path in FP-tree

Suppose a (conditional) FP-tree T has a shared single prefix path P
Mining can be decomposed into two parts:
- Reduction of the single prefix path into one node
- Concatenation of the mining results of the two parts

(Figure: a tree whose single prefix path a1:n1 → a2:n2 → a3:n3 branches into b1:m1 and c1:k1, c2:k2, c3:k3 is decomposed into the prefix path and the remaining branching part r1; the results mined from each part are then concatenated.)
Benefits of the FP-tree Structure

Completeness:
- Preserves complete information for frequent pattern mining
- Never breaks a long pattern of any transaction
Compactness:
- Reduces irrelevant info: infrequent items are gone
- Items in frequency-descending order: the more frequently occurring, the more likely to be shared
- Never larger than the original database (not counting node-links and the count fields)
The Frequent Pattern Growth Mining Method

Idea: frequent pattern growth
- Recursively grow frequent patterns by pattern and database partition
Method:
- For each frequent item, construct its conditional pattern base, and then its conditional FP-tree
- Repeat the process on each newly created conditional FP-tree
- Until the resulting FP-tree is empty, or it contains only one path; a single path generates all the combinations of its sub-paths, each of which is a frequent pattern
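Putting the whole method together, a compact FP-growth sketch (illustrative, not the authors' implementation; the class and function names are mine, it rebuilds a conditional FP-tree per suffix, and it omits the single-path shortcut described above):

```python
# Sketch of FP-growth: build an FP-tree, then mine conditional pattern bases recursively
from collections import Counter, defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_tree(db, min_support):
    """db: list of (transaction, count) pairs -> (header table, item counts, frequent items)."""
    counts = Counter()
    for t, n in db:
        for i in t:
            counts[i] += n
    freq = {i for i, n in counts.items() if n >= min_support}
    root, header = Node(None, None), defaultdict(list)
    for t, n in db:
        node = root
        # keep frequent items only, inserted in a fixed frequency-descending order
        for i in sorted((i for i in t if i in freq), key=lambda i: (-counts[i], i)):
            if i not in node.children:
                node.children[i] = Node(i, node)
                header[i].append(node.children[i])   # node-link in the header table
            node = node.children[i]
            node.count += n
    return header, counts, freq

def fpgrowth(db, min_support, suffix=()):
    header, counts, freq = build_tree(db, min_support)
    patterns = []
    for item in freq:
        pattern = (item,) + suffix
        patterns.append((pattern, counts[item]))
        # conditional pattern base: the prefix path of every node holding this item
        cond_db = []
        for node in header[item]:
            path, p = [], node.parent
            while p.item is not None:
                path.append(p.item)
                p = p.parent
            if path:
                cond_db.append((path, node.count))
        patterns += fpgrowth(cond_db, min_support, pattern)   # recurse on the base
    return patterns

db = [(t, 1) for t in [list("facdgimp"), list("abcflmo"), list("bfhjow"),
                       list("bcksp"), list("afcelpmn")]]
for pat, n in sorted(fpgrowth(db, 3)):
    print("".join(pat), n)   # prints each frequent pattern with its support, e.g. fcam 3
```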
Scaling FP-growth by Database Projection

What if the FP-tree cannot fit in memory? DB projection:
- First partition the database into a set of projected DBs
- Then construct and mine the FP-tree for each projected DB
Parallel projection vs. partition projection:
- Parallel projection: project the DB in parallel for each frequent item; space-costly, but all partitions can be processed in parallel
- Partition projection: partition the DB based on the ordered frequent items; pass the unprocessed parts on to the subsequent partitions
Partition-Based Projection

Parallel projection needs a lot of disk space; partition projection saves it.

Tran. DB: fcamp, fcabm, fb, cbp, fcamp

p-proj DB: fcam, cb, fcam
m-proj DB: fcab, fca, fca
b-proj DB: f, cb, …
a-proj DB: fc, …
c-proj DB: f, …
f-proj DB: …

am-proj DB: fc, fc, fc
cm-proj DB: f, f, f
FP-Growth vs. Apriori: Scalability With the Support Threshold

(Figure: runtime in seconds vs. support threshold (0–3%) on data set T25I20D10K, comparing D1 FP-growth runtime and D1 Apriori runtime; Apriori's runtime grows sharply as the threshold drops, while FP-growth stays low.)
FP-Growth vs. Tree-Projection: Scalability with the Support Threshold

(Figure: runtime in seconds vs. support threshold (0–2%) on data set T25I20D100K, comparing D2 FP-growth and D2 TreeProjection; the two scale similarly, with FP-growth faster at the lowest thresholds.)
Advantages of the Pattern Growth Approach

Divide-and-conquer:
- Decompose both the mining task and the DB according to the frequent patterns obtained so far
- Leads to focused search of smaller databases
Other factors:
- No candidate generation, no candidate test
- Compressed database: the FP-tree structure
- No repeated scan of the entire database
- Basic ops are counting local frequent items and building sub-FP-trees: no pattern search and matching
A good open-source implementation and refinement of FPGrowth: FPGrowth+ (Grahne and J. Zhu, FIMI'03)
Further Improvements of Mining Methods

AFOPT (Liu, et al. @KDD'03)
- A "push-right" method for mining condensed frequent pattern (CFP) trees
Carpenter (Pan, et al. @KDD'03)
- Mines data sets with few rows but numerous columns
- Constructs a row-enumeration tree for efficient mining
FPgrowth+ (Grahne and Zhu, FIMI'03)
- Efficiently Using Prefix-Trees in Mining Frequent Itemsets, Proc. ICDM'03 Int. Workshop on Frequent Itemset Mining Implementations (FIMI'03), Melbourne, FL, Nov. 2003
TD-Close (Liu, et al. @SDM'06)
Extension of Pattern Growth Mining Methodology

- Mining closed frequent itemsets and max-patterns: CLOSET (DMKD'00), FPclose, and FPMax (Grahne & Zhu, FIMI'03)
- Mining sequential patterns: PrefixSpan (ICDE'01), CloSpan (SDM'03), BIDE (ICDE'04)
- Mining graph patterns: gSpan (ICDM'02), CloseGraph (KDD'03)
- Constraint-based mining of frequent patterns: convertible constraints (ICDE'01), gPrune (PAKDD'03)
- Computing iceberg data cubes with complex measures: H-tree, H-cubing, and Star-cubing (SIGMOD'01, VLDB'03)
- Pattern-growth-based clustering: MaPle (Pei, et al., ICDM'03)
- Pattern-growth-based classification: mining frequent and discriminative patterns (Cheng, et al., ICDE'07)
Scalable Frequent Itemset Mining Methods

Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
Mining Closed Frequent Patterns and Max-Patterns
ECLAT: Mining by Exploring Vertical Data Format

Vertical format: t(AB) = {T11, T25, …}
- tid-list: list of transaction ids containing an itemset
Deriving frequent patterns based on vertical intersections:
- t(X) = t(Y): X and Y always happen together
- t(X) ⊆ t(Y): a transaction having X always has Y
Using diffsets to accelerate mining:
- Only keep track of differences of tids
- t(X) = {T1, T2, T3}, t(XY) = {T1, T3} → Diffset(XY, X) = {T2}
Eclat (Zaki et al. @KDD'97)
Mining closed patterns using the vertical format: CHARM (Zaki & Hsiao @SDM'02)
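A minimal sketch of the vertical representation (illustrative, reusing the five-transaction market-basket table from the basic-concepts slides):

```python
# Sketch: tid-lists, intersection, and diffsets in the vertical data format
db = {10: {"Beer", "Nuts", "Diaper"}, 20: {"Beer", "Coffee", "Diaper"},
      30: {"Beer", "Diaper", "Eggs"}, 40: {"Nuts", "Eggs", "Milk"},
      50: {"Nuts", "Coffee", "Diaper", "Eggs", "Milk"}}

def tidlist(itemset):
    return {tid for tid, items in db.items() if itemset <= items}

tX = tidlist({"Nuts"})
tXY = tidlist({"Nuts"}) & tidlist({"Diaper"})   # support via tid-list intersection
print(sorted(tXY), len(tXY))                    # [10, 50] -> support count 2
print(sorted(tX - tXY))                         # diffset(XY, X) = [40]
```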
Scalable Frequent Itemset Mining Methods

Apriori: A Candidate Generation-and-Test Approach
Improving the Efficiency of Apriori
FPGrowth: A Frequent Pattern-Growth Approach
ECLAT: Frequent Pattern Mining with Vertical Data Format
Mining Closed Frequent Patterns and Max-Patterns
Mining Frequent Closed Patterns: CLOSET (min_sup = 2)

TID   Items
10    a, c, d, e, f
20    a, b, e
30    c, e, f
40    a, c, d, f
50    c, e, f

F-list: list of all frequent items in support-ascending order: d-a-f-e-c
Divide the search space:
- Patterns having d
- Patterns having a but no d, etc.
Find frequent closed patterns recursively:
- Every transaction having d also has c, f, a → cfad is a frequent closed pattern
J. Pei, J. Han & R. Mao. "CLOSET: An Efficient Algorithm for Mining Frequent Closed Itemsets", DMKD'00.
CLOSET+: Mining Closed Itemsets by Pattern-Growth

- Itemset merging: if Y appears in every occurrence of X, then Y is merged with X
- Sub-itemset pruning: if Y ⊃ X and sup(X) = sup(Y), X and all of X's descendants in the set-enumeration tree can be pruned
- Hybrid tree projection: bottom-up physical tree-projection and top-down pseudo tree-projection
- Item skipping: if a local frequent item has the same support in several header tables at different levels, it can be pruned from the header tables at the higher levels
- Efficient subset checking
MaxMiner: Mining Max-Patterns

Tid   Items
10    A, B, C, D, E
20    B, C, D, E
30    A, C, D, F

1st scan: find the frequent items: A, B, C, D, E
2nd scan: find the support of the potential max-patterns:
- AB, AC, AD, AE, ABCDE
- BC, BD, BE, BCDE
- CD, CE, CDE, DE
Since BCDE is a max-pattern, there is no need to check BCD, BDE, CDE in a later scan
R. Bayardo. Efficiently mining long patterns from databases. SIGMOD'98
CHARM: Mining by Exploring Vertical Data Format

Vertical format: t(AB) = {T11, T25, …}
- tid-list: list of transaction ids containing an itemset
Deriving closed patterns based on vertical intersections:
- t(X) = t(Y): X and Y always happen together
- t(X) ⊆ t(Y): a transaction having X always has Y
Using diffsets to accelerate mining:
- Only keep track of differences of tids
- t(X) = {T1, T2, T3}, t(XY) = {T1, T3} → Diffset(XY, X) = {T2}
Eclat/MaxEclat (Zaki et al. @KDD'97), VIPER (P. Shenoy et al. @SIGMOD'00), CHARM (Zaki & Hsiao @SDM'02)
Visualization of Association Rules: Plane Graph
(figure not reproduced)

Visualization of Association Rules: Rule Graph
(figure not reproduced)

Visualization of Association Rules (SGI/MineSet 3.0)
(figure not reproduced)
Computational Complexity of Frequent Itemset Mining

How many itemsets may potentially be generated in the worst case?
- The number of frequent itemsets to be generated is sensitive to the minsup threshold
- When minsup is low, there exist potentially an exponential number of frequent itemsets
- The worst case: M^N, where M is the number of distinct items and N is the max length of transactions
The worst-case complexity vs. the expected probability:
- Ex.: suppose Walmart has 10^4 kinds of products
- The chance of picking one particular product: 10^-4
- The chance of picking a particular set of 10 products: ~10^-40
- What is the chance that this particular set of 10 products is frequent, appearing 10^3 times in 10^9 transactions?
Chapter 5: Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods

Basic Concepts
Frequent Itemset Mining Methods
Which Patterns Are Interesting?—Pattern Evaluation Methods
Summary
Interestingness Measure: Correlations (Lift)

play basketball ⇒ eat cereal [40%, 66.7%] is misleading
- The overall % of students eating cereal is 75% > 66.7%
play basketball ⇒ not eat cereal [20%, 33.3%] is more accurate, although with lower support and confidence
Measure of dependent/correlated events: lift

lift = P(A ∪ B) / (P(A) P(B))

              Basketball   Not basketball   Sum (row)
Cereal        2000         1750             3750
Not cereal    1000         250              1250
Sum (col.)    3000         2000             5000

lift(B, C) = (2000/5000) / ((3000/5000) × (3750/5000)) = 0.89
lift(B, ¬C) = (1000/5000) / ((3000/5000) × (1250/5000)) = 1.33
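The two lift values can be reproduced directly from the contingency table (a small illustrative sketch):

```python
# Sketch: lift from the basketball/cereal contingency table above
n = 5000.0
p_B, p_C, p_notC = 3000 / n, 3750 / n, 1250 / n

def lift(p_ab, p_a, p_b):
    """lift = P(A ∪ B) / (P(A) P(B)): 1 = independent, <1 negative, >1 positive correlation."""
    return p_ab / (p_a * p_b)

print(round(lift(2000 / n, p_B, p_C), 2))     # 0.89: basketball and cereal negatively correlated
print(round(lift(1000 / n, p_B, p_notC), 2))  # 1.33: basketball and "not cereal" positively correlated
```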
Are Lift and χ² Good Measures of Correlation?

"Buy walnuts ⇒ buy milk [1%, 80%]" is misleading if 85% of customers buy milk
Support and confidence are not good indicators of correlation
Over 20 interestingness measures have been proposed (see Tan, Kumar, Srivastava @KDD'02)
Which are the good ones?
Null-Invariant Measures
(table of measures not reproduced)
Comparison of Interestingness Measures

Null-(transaction) invariance is crucial for correlation analysis
- Lift and χ² are not null-invariant
- 5 null-invariant measures

              Milk     No Milk    Sum (row)
Coffee        m, c     ~m, c      c
No Coffee     m, ~c    ~m, ~c     ~c
Sum (col.)    m        ~m         Σ

Null-transactions w.r.t. m and c: transactions containing neither milk nor coffee
The Kulczynski measure (1927) is null-invariant
Subtle: the measures disagree
Analysis of DBLP Coauthor Relationships

Recent DB conferences, removing balanced associations, low support, etc.
Advisor-advisee relation: Kulc is high, coherence is low, cosine is in the middle
Tianyi Wu, Yuguo Chen, and Jiawei Han, "Association Mining in Large Databases: A Re-Examination of Its Measures", Proc. 2007 Int. Conf. Principles and Practice of Knowledge Discovery in Databases (PKDD'07), Sept. 2007
Which Null-Invariant Measure Is Better?

IR (Imbalance Ratio): measures the imbalance of two itemsets A and B in rule implications
Kulczynski and the Imbalance Ratio (IR) together present a clear picture for all three datasets D4 through D6:
- D4 is balanced and neutral
- D5 is imbalanced and neutral
- D6 is very imbalanced and neutral
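As a sketch (the formulas below are the standard definitions from Han et al.'s textbook; they are assumed here since the slide does not spell them out, and the support counts are hypothetical):

```python
# Sketch: Kulczynski measure and Imbalance Ratio (definitions assumed from Han et al.)
def kulc(sup_a, sup_b, sup_ab):
    """Kulc(A,B) = (P(A|B) + P(B|A)) / 2; null-invariant, in [0, 1]."""
    return 0.5 * (sup_ab / sup_a + sup_ab / sup_b)

def imbalance_ratio(sup_a, sup_b, sup_ab):
    """IR(A,B) = |sup(A) - sup(B)| / (sup(A) + sup(B) - sup(A ∪ B)); 0 = fully balanced."""
    return abs(sup_a - sup_b) / (sup_a + sup_b - sup_ab)

# Hypothetical support counts for a milk/coffee-style pair
print(kulc(1000, 1000, 900), imbalance_ratio(1000, 1000, 900))    # strong & balanced: 0.9, 0.0
print(kulc(1000, 10000, 900), imbalance_ratio(1000, 10000, 900))  # weak & very imbalanced: ~0.50, ~0.89
```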
Mining Frequent Patterns, Association and Correlations: Basic Concepts and Methods

Basic Concepts
Frequent Itemset Mining Methods
Which Patterns Are Interesting?—Pattern Evaluation Methods
Summary
Summary

Basic concepts: association rules, the support-confidence framework, closed and max-patterns
Scalable frequent pattern mining methods:
- Apriori (candidate generation & test)
- Projection-based (FPgrowth, CLOSET+, …)
- Vertical format approach (ECLAT, CHARM, …)
Which patterns are interesting? Pattern evaluation methods
