DM Clustering
• Dissimilarity matrix
  – (one mode)

        | 0                            |
        | d(2,1)  0                    |
        | d(3,1)  d(3,2)  0            |
        |   :       :       :          |
        | d(n,1)  d(n,2)  ...  ...  0  |
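As a concrete sketch (Euclidean distance assumed; the helper name is mine), the one-mode dissimilarity matrix can be built like this:

```python
import math

def dissimilarity_matrix(points):
    """Build the lower-triangular dissimilarity matrix d(i, j)
    using Euclidean distance; the diagonal is 0."""
    n = len(points)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            d[i][j] = math.dist(points[i], points[j])
    return d

# Example: three 2-D objects
pts = [(0, 0), (3, 4), (0, 4)]
D = dissimilarity_matrix(pts)
# D[1][0] is d(2,1), D[2][0] is d(3,1), D[2][1] is d(3,2)
```

Only the lower triangle is filled, mirroring the one-mode layout above; d(i,j) = d(j,i), so the upper triangle is redundant.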
Type of data in clustering analysis
• Interval-scaled variables
• Binary variables
• Nominal, ordinal, and ratio variables
• Variables of mixed types
Interval-valued variables
• Standardize data
  – Calculate the mean absolute deviation:
        s_f = (1/n)(|x_1f − m_f| + |x_2f − m_f| + ... + |x_nf − m_f|)
    where m_f is the mean of variable f:
        m_f = (1/n)(x_1f + x_2f + ... + x_nf)
  – Calculate the standardized measurement (z-score):
        z_if = (x_if − m_f) / s_f
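A minimal sketch of this standardization (the function name is mine):

```python
def standardize(values):
    """Z-score-style standardization of one variable f using the
    mean absolute deviation s_f (more robust to outliers than the
    standard deviation)."""
    n = len(values)
    m_f = sum(values) / n                            # mean of the variable
    s_f = sum(abs(x - m_f) for x in values) / n      # mean absolute deviation
    return [(x - m_f) / s_f for x in values]

scores = standardize([2.0, 4.0, 6.0, 8.0])
# m_f = 5.0, s_f = (3 + 1 + 1 + 3)/4 = 2.0 -> [-1.5, -0.5, 0.5, 1.5]
```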
• Distance measure for symmetric binary variables:
        d(i,j) = (b + c) / (a + b + c + d)
• Distance measure for asymmetric binary variables:
        d(i,j) = (b + c) / (a + b + c)
  where, from the 2×2 contingency table, a = number of 1/1 matches,
  b = 1/0 mismatches, c = 0/1 mismatches, d = 0/0 matches
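A sketch of both measures from the 2×2 contingency counts (the helper and its `symmetric` flag are my naming):

```python
def binary_dissimilarity(x, y, symmetric=True):
    """d(i,j) for two binary vectors from their contingency counts:
    a = 1/1 matches, b = 1/0, c = 0/1, d = 0/0 matches.
    Symmetric: (b+c)/(a+b+c+d); asymmetric: (b+c)/(a+b+c),
    which ignores the 0/0 matches."""
    a = sum(1 for u, v in zip(x, y) if u == 1 and v == 1)
    b = sum(1 for u, v in zip(x, y) if u == 1 and v == 0)
    c = sum(1 for u, v in zip(x, y) if u == 0 and v == 1)
    d = sum(1 for u, v in zip(x, y) if u == 0 and v == 0)
    if symmetric:
        return (b + c) / (a + b + c + d)
    return (b + c) / (a + b + c)

x = [1, 0, 1, 0, 0, 0]
y = [1, 1, 0, 0, 0, 0]
# a=1, b=1, c=1, d=3 -> symmetric: 2/6, asymmetric: 2/3
```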
Variables of mixed types
• A weighted formula may be used to combine their effects:
        d(i,j) = Σ_f δ_ij^(f) d_ij^(f) / Σ_f δ_ij^(f)
  – f is binary or nominal:
        d_ij^(f) = 0 if x_if = x_jf, or d_ij^(f) = 1 otherwise
  – f is interval-based: use the normalized distance
  – f is ordinal or ratio-scaled: compute the rank r_if, set
        z_if = (r_if − 1) / (M_f − 1)
    and treat z_if as interval-scaled
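A sketch of the weighted combination, restricted for brevity to nominal and already-normalized interval variables, and using `None` to mark a missing value (δ_f = 0); the names are mine:

```python
def mixed_distance(x, y, types):
    """d(i,j) = sum(delta_f * d_f) / sum(delta_f) over variables of
    mixed types. delta_f = 0 (skip) when either value is missing,
    else 1. Interval variables are assumed pre-scaled to [0, 1]."""
    num = den = 0.0
    for xf, yf, t in zip(x, y, types):
        if xf is None or yf is None:
            continue                      # delta_f = 0: variable is skipped
        if t == "nominal":
            d_f = 0.0 if xf == yf else 1.0
        else:                             # "interval", normalized to [0, 1]
            d_f = abs(xf - yf)
        num += d_f
        den += 1.0
    return num / den

d = mixed_distance(["red", 0.2, None], ["blue", 0.6, 0.9],
                   ["nominal", "interval", "interval"])
# contributing terms: nominal mismatch (1.0) and |0.2 - 0.6| = 0.4
# -> (1.0 + 0.4) / 2 = 0.7
```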
• Diameter: square root of the average squared distance between all
  pairs of points in the cluster:
        D_m = sqrt( Σ_{i=1..N} Σ_{j=1..N} (t_ip − t_iq)² / (N(N − 1)) )
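A sketch of the diameter computation (Euclidean distance over the N(N − 1) ordered pairs; the function name is mine):

```python
import math

def diameter(cluster):
    """D_m: square root of the average squared pairwise distance,
    averaged over the N(N-1) ordered pairs in the cluster."""
    n = len(cluster)
    total = sum(math.dist(cluster[i], cluster[j]) ** 2
                for i in range(n) for j in range(n) if i != j)
    return math.sqrt(total / (n * (n - 1)))

pts = [(0.0, 0.0), (0.0, 2.0), (2.0, 0.0)]
# squared pairwise distances 4, 4, 8, each counted twice -> mean 16/3
```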
Partitioning Algorithms: Basic Concept
• Partitioning method: Construct a partition of a database D
  of n objects into a set of k clusters, s.t. the sum of squared
  distances is minimized:
        E = Σ_{m=1..k} Σ_{t_mi ∈ K_m} (C_m − t_mi)²
[Figure: The K-Means method on a 2-D point set with K = 2: arbitrarily
choose K objects as the initial cluster centers; assign each object to
the most similar center; update the cluster means and reassign; repeat
until no change.]
Comments on the K-Means Method
• Strength: Relatively efficient: O(tkn), where n is # objects, k
is # clusters, and t is # iterations. Normally, k, t << n.
• Comparing: PAM: O(k(n−k)²), CLARA: O(ks² + k(n−k))
• Comment: Often terminates at a local optimum. The global
optimum may be found using techniques such as:
deterministic annealing and genetic algorithms
• Weakness
– Applicable only when a mean is defined; what about categorical
  data?
– Need to specify k, the number of clusters, in advance
– Unable to handle noisy data and outliers
– Not suitable to discover clusters with non-convex shapes
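A self-contained sketch of plain k-means as described above (random initialization; all names are mine). Note that it simply stops at whatever local optimum it reaches:

```python
import math
import random

def k_means(points, k, iters=100, seed=0):
    """Plain k-means: pick k initial centers at random, then alternate
    (1) assign each object to the most similar center and
    (2) recompute each center as the mean of its cluster."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[c]
            for c, cl in enumerate(clusters)
        ]
        if new_centers == centers:   # converged (often only a local optimum)
            break
        centers = new_centers
    return centers, clusters

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centers, clusters = k_means(pts, k=2)
```

The `if cl else centers[c]` guard keeps an old center when its cluster empties, a simple way to sidestep one of the method's degenerate cases.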
Variations of the K-Means Method
• A few variants of the k-means which differ in
– Selection of the initial k means
– Dissimilarity calculations
[Figure: The K-Medoids method: arbitrarily choose k objects as the
initial medoids; assign each remaining object to the nearest medoid;
then loop: randomly select a non-medoid object O_random, compute the
total cost of swapping, and swap if quality is improved, until no
change.]
PAM (Partitioning Around Medoids) (1987)
[Figure: The four cases examined when computing the cost of swapping
a medoid i with a non-medoid h, in terms of each non-selected object j
and each remaining medoid t.]
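A sketch of the PAM idea (names mine; for determinism the initial medoids are taken as the first k objects rather than chosen at random): keep swapping a medoid with a non-medoid whenever the swap lowers the total cost:

```python
import math

def total_cost(points, medoids):
    """Cost of a medoid set: each object contributes its distance
    to the nearest medoid."""
    return sum(min(math.dist(p, m) for m in medoids) for p in points)

def pam(points, k):
    """PAM sketch: repeatedly try swapping each medoid with each
    non-medoid and accept any swap that lowers the total cost,
    until no improving swap remains."""
    medoids = list(points[:k])
    improved = True
    while improved:
        improved = False
        for m in list(medoids):
            for h in points:
                if h in medoids:
                    continue
                candidate = [h if x == m else x for x in medoids]
                if total_cost(points, candidate) < total_cost(points, medoids):
                    medoids, improved = candidate, True
    return medoids

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
meds = pam(pts, k=2)   # one medoid settles in each tight group
```

Because every accepted swap strictly lowers the cost, the loop must terminate; each pass over the O(k(n−k)) candidate swaps is what makes PAM costlier than k-means.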
Dendrogram: Shows How the Clusters are Merged
Recent Hierarchical Clustering Methods
• Major weakness of agglomerative clustering
methods
– do not scale well: time complexity of at least O(n²),
  where n is the number of total objects
– can never undo what was done previously
• Integration of hierarchical with distance-based
clustering
– BIRCH (1996): uses CF-tree and incrementally adjusts
the quality of sub-clusters
– ROCK (1999): clustering categorical data by neighbor
and link analysis
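A sketch of single-link agglomerative merging (names mine) that makes both weaknesses visible: every pass compares all cluster pairs, so the work is at least O(n²), and a merge, once made, is never undone:

```python
import math

def agglomerative(points, k):
    """Bottom-up clustering: every object starts in its own cluster;
    repeatedly merge the two closest clusters (single link, i.e. the
    minimum pointwise distance) until only k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):          # all cluster pairs: O(n^2)
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(p, q)
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))      # merge is final: no undo
    return clusters

pts = [(0, 0), (0, 1), (5, 5), (5, 6), (10, 0)]
out = agglomerative(pts, k=3)
```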
Clustering High-Dimensional Data
• Clustering high-dimensional data
– Many applications: text documents, DNA micro-array data
– Major challenges:
• Many irrelevant dimensions may mask clusters
• Distance measure becomes meaningless—due to equi-distance
• Clusters may exist only in some subspaces
• Methods
– Feature transformation: only effective if most dimensions are relevant
• PCA & SVD useful only when features are highly correlated/redundant
– Feature selection: wrapper or filter approaches
• useful to find a subspace where the data have nice clusters
– Subspace-clustering: find clusters in all the possible subspaces
• CLIQUE, ProClus, and frequent pattern-based clustering
The Curse of Dimensionality
(graphs adapted from Parsons et al. KDD
Explorations 2004)
• Data in only one dimension is
relatively packed
• Adding a dimension “stretches” the
points across that dimension,
making them further apart
• Adding more dimensions will
make the points further apart—
high dimensional data is
extremely sparse
• Distance measure becomes meaningless
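A small experiment (setup and names mine) showing the contrast between the nearest and farthest distances collapsing as dimensions are added:

```python
import math
import random

def distance_contrast(dim, n=200, seed=0):
    """(max - min) / min over the distances from the origin to n random
    points in the unit hypercube: a rough gauge of how meaningful a
    distance measure stays as dimensionality grows."""
    rng = random.Random(seed)
    dists = [math.sqrt(sum(rng.random() ** 2 for _ in range(dim)))
             for _ in range(n)]
    return (max(dists) - min(dists)) / min(dists)

low, high = distance_contrast(2), distance_contrast(100)
# in 100 dimensions the distances concentrate, so the contrast shrinks
```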
Why Subspace Clustering?
(adapted from Parsons et al. SIGKDD Explorations 2004)
• Clusters may exist only in some subspaces
• Subspace-clustering: find clusters in all the
subspaces
What Is Outlier Discovery?
• What are outliers?
– Objects that are considerably dissimilar from the
remainder of the data
– Example: Sports: Michael Jordan, Wayne Gretzky, ...
• Problem: Define and find outliers in large data
sets
• Applications:
– Credit card fraud detection
– Telecom fraud detection
– Customer segmentation
– Medical analysis
Outlier Discovery: Statistical Approaches
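One simple statistical sketch (my example, one classic rule among many in this family): assume the data roughly follow a normal distribution and flag values whose z-score exceeds a threshold:

```python
import statistics

def z_score_outliers(values, threshold=3.0):
    """Model-based outlier test: fit mean and standard deviation,
    then flag values lying more than `threshold` standard deviations
    from the mean."""
    mu = statistics.mean(values)
    sigma = statistics.stdev(values)
    return [x for x in values if abs(x - mu) / sigma > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 9, 10, 95]
outliers = z_score_outliers(data, threshold=2.0)   # flags the 95
```

Note the weakness this exposes: the outlier itself inflates both the mean and the standard deviation, which is why a looser threshold is used here and why robust estimates are often preferred in practice.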