
CLUSTERING

What is Cluster Analysis?
◼ Cluster: a collection of data objects
  ◼ Similar to one another within the same group
  ◼ Dissimilar to the objects in other groups
◼ Cluster analysis (clustering, data segmentation): finding similarities between data objects according to the characteristics found in the data, and grouping similar data objects into clusters
◼ Unsupervised learning: no predefined classes
◼ Applications: biology, information retrieval, land use, marketing, city planning, earthquake studies, climate, economic science
Quality: What Is Good Clustering?
◼ A good clustering method will produce high-quality clusters with
  ◼ high intra-class similarity
  ◼ low inter-class similarity
◼ The quality of a clustering method depends on
  ◼ the similarity measure used by the method,
  ◼ its implementation, and
  ◼ its ability to discover some or all of the hidden patterns


Considerations for Cluster Analysis
◼ Partitioning criteria
  ◼ Single-level vs. hierarchical partitioning (often, multi-level hierarchical partitioning is desirable)
◼ Separation of clusters
  ◼ Exclusive (e.g., one customer belongs to only one region) vs. non-exclusive (e.g., one document may belong to more than one class)


◼ Similarity measure
  ◼ Distance-based (e.g., Euclidean, road network, vector) vs. connectivity-based (e.g., density or contiguity)
◼ Clustering space
  ◼ Full space (often when low-dimensional) vs. subspaces (often in high-dimensional clustering)


Requirements and Challenges
◼ Scalability
  ◼ Clustering all the data instead of only samples
◼ Ability to deal with different types of attributes
  ◼ Numerical, binary, categorical, ordinal, linked, and mixtures of these
◼ Constraint-based clustering
  ◼ User may give inputs on constraints
  ◼ Use domain knowledge to determine input parameters
◼ Interpretability and usability
◼ Others
  ◼ Discovery of clusters with arbitrary shape
  ◼ Ability to deal with noisy data
  ◼ Incremental clustering and insensitivity to input order
  ◼ High dimensionality


Major Clustering Approaches (I)
◼ Partitioning approach:
  ◼ Construct various partitions and then evaluate them by some criterion, e.g., minimizing the sum of squared errors
  ◼ Typical methods: k-means, k-medoids, CLARANS
◼ Hierarchical approach:
  ◼ Create a hierarchical decomposition of the set of data (or objects) using some criterion
  ◼ Typical methods: DIANA, AGNES, BIRCH, CHAMELEON
◼ Density-based approach:
  ◼ Based on connectivity and density functions
  ◼ Typical methods: DBSCAN, OPTICS, DenClue
◼ Grid-based approach:
  ◼ Based on a multiple-level granularity structure
  ◼ Typical methods: STING, WaveCluster, CLIQUE
◼ Model-based:
  ◼ A model is hypothesized for each of the clusters, and the algorithm tries to find the best fit of the data to the given model
  ◼ Typical methods: EM, SOM, COBWEB


Partitioning Algorithms: Basic Concept
◼ Partitioning method: partitioning a database D of n objects into a set of k clusters, such that the sum of squared distances is minimized (where c_i is the centroid or medoid of cluster C_i):

  E = \sum_{i=1}^{k} \sum_{p \in C_i} (p - c_i)^2

◼ Given k, find a partition of k clusters that optimizes the chosen partitioning criterion (a small sketch of evaluating E follows)
  ◼ Global optimal: exhaustively enumerate all partitions
  ◼ Heuristic methods: the k-means and k-medoids algorithms
    ◼ k-means: each cluster is represented by the center of the cluster
    ◼ k-medoids or PAM: each cluster is represented by one of the objects in the cluster
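As a concrete illustration of the criterion E above, here is a minimal NumPy sketch that evaluates the sum of squared errors for a given partition (the function name sse and the array-based representation are illustrative choices, not from the slides):

```python
import numpy as np

def sse(points, labels, centers):
    """Sum of squared distances of each point to its cluster center (E)."""
    return sum(
        np.sum((points[labels == i] - c) ** 2)
        for i, c in enumerate(centers)
    )

# usage: sse(X, labels, centers) for an (n, d) array X, integer labels,
# and a (k, d) array of centroids or medoids
```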


The K-Means Clustering Method
◼ Given k, the k-means algorithm is implemented in four steps (a runnable sketch of this loop follows):
  1. Partition objects into k nonempty subsets
  2. Compute seed points as the centroids of the clusters of the current partitioning (the centroid is the center, i.e., mean point, of the cluster)
  3. Assign each object to the cluster with the nearest seed point
  4. Go back to Step 2; stop when the assignment does not change
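The following is a minimal sketch of the loop just described, assuming the data is an (n, d) NumPy array; it is not an optimized or production implementation (e.g., empty clusters are not handled):

```python
import numpy as np

def k_means(X, k, max_iter=100, seed=0):
    """Plain k-means on an (n, d) array X; returns labels and centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # initial seed points
    for _ in range(max_iter):
        # assign each object to the cluster with the nearest centroid
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        # recompute centroids (mean points) of the current partitioning
        new_centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new_centers, centers):  # assignments have stabilized
            break
        centers = new_centers
    return labels, centers
```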


An Example of K-Means Clustering

(Figure: with K = 2, the initial data set is arbitrarily partitioned into k groups; the cluster centroids are updated, objects are reassigned to the nearest centroid, and the loop repeats if needed.)

◼ Partition objects into k nonempty subsets
◼ Repeat
  ◼ Compute the centroid (i.e., mean point) for each partition
  ◼ Assign each object to the cluster of its nearest centroid
◼ Until no change
What Is the Problem of the K-Means Method?
◼ The k-means algorithm is sensitive to outliers!
  ◼ An object with an extremely large value may substantially distort the distribution of the data
◼ K-medoids: instead of taking the mean value of the objects in a cluster as a reference point, a medoid can be used, which is the most centrally located object in a cluster

(Figure: two scatter plots on 0–10 axes contrasting a mean-based center, which is pulled toward an outlier, with a medoid.)


The K-Medoids Clustering Method
◼ K-medoids clustering: find representative objects (medoids) in clusters
◼ PAM (Partitioning Around Medoids)
  ◼ Starts from an initial set of medoids and iteratively replaces one of the medoids by one of the non-medoids if it improves the total distance of the resulting clustering (a sketch of this swap step follows)
  ◼ PAM works effectively for small data sets, but does not scale well to large data sets (due to its computational complexity)
◼ Efficiency improvements on PAM
  ◼ CLARA: PAM on samples
  ◼ CLARANS: randomized re-sampling
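A minimal sketch of PAM's swap step, assuming Euclidean distance and an (n, d) array X (the function names and the single greedy pass are illustrative simplifications; full PAM repeats such passes until no improving swap exists):

```python
import numpy as np

def total_cost(X, medoid_idx):
    """Sum of distances from each object to its nearest medoid."""
    d = np.linalg.norm(X[:, None] - X[medoid_idx], axis=2)
    return d.min(axis=1).sum()

def pam_swap_pass(X, medoid_idx):
    """One pass over all (medoid, non-medoid) exchanges; keep a swap
    whenever it lowers the total cost of the resulting clustering."""
    best = total_cost(X, medoid_idx)
    for i in range(len(medoid_idx)):
        for o in range(len(X)):
            if o in medoid_idx:
                continue
            trial = list(medoid_idx)
            trial[i] = o
            c = total_cost(X, trial)
            if c < best:
                best, medoid_idx = c, trial
    return medoid_idx, best
```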


PAM: A Typical K-Medoids Algorithm

(Figure: a K = 2 example on a 10 × 10 grid; one configuration has total cost 20 and a candidate swap configuration has total cost 26.)

◼ Arbitrarily choose k objects as the initial medoids
◼ Assign each remaining object to the nearest medoid
◼ Do loop, until no change:
  ◼ Randomly select a non-medoid object, O_random
  ◼ Compute the total cost of swapping a medoid with O_random
  ◼ If the quality is improved, perform the swap


What Is the Problem with PAM?
◼ PAM is more robust than k-means in the presence of noise and outliers, because a medoid is less influenced by outliers or other extreme values than a mean
◼ PAM works efficiently for small data sets but does not scale well to large data sets
➔ Sampling-based method: CLARA (Clustering LARge Applications)


Hierarchical Clustering
◼ Use the distance matrix as the clustering criterion. This method does not require the number of clusters k as an input, but needs a termination condition

(Figure: over steps 0–4, agglomerative clustering (AGNES) merges bottom-up: a and b into ab, d and e into de, then c with de into cde, and finally ab with cde into abcde; divisive clustering (DIANA) runs the same steps in reverse, top-down.)
AGNES (Agglomerative Nesting)
◼ Introduced in Kaufmann and Rousseeuw (1990)
◼ Implemented in statistical packages, e.g., Splus
◼ Uses the single-link method and the dissimilarity matrix
◼ Merges nodes that have the least dissimilarity
◼ Proceeds in a non-descending fashion
◼ Eventually all nodes belong to the same cluster

(Figure: three 10 × 10 scatter plots showing successive single-link merges.)
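For readers who want to try this, here is a minimal sketch using SciPy's standard hierarchical clustering routine (the sample points are made up for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# five made-up 2-D points, purely for illustration
X = np.array([[1.0, 1.0], [1.5, 1.2], [8.0, 8.0], [8.3, 8.4], [5.0, 5.0]])

Z = linkage(X, method='single')  # single-link agglomerative merging, as in AGNES
# each row of Z records one merge: (cluster_a, cluster_b, distance, new_size)
print(Z)
```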


Dendrogram: Shows How Clusters Are Merged
◼ Decompose data objects into several levels of nested partitioning (a tree of clusters), called a dendrogram
◼ A clustering of the data objects is obtained by cutting the dendrogram at the desired level; each connected component then forms a cluster
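Cutting the dendrogram at a chosen height can be sketched with SciPy's fcluster (continuing the kind of toy data used above; the cut height 2.0 is arbitrary):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1.0, 1.0], [1.5, 1.2], [8.0, 8.0], [8.3, 8.4], [5.0, 5.0]])
Z = linkage(X, method='single')

# cut the dendrogram at height 2.0: every connected component
# below the cut becomes one cluster
labels = fcluster(Z, t=2.0, criterion='distance')
print(labels)  # e.g. three clusters: the two tight pairs and the lone point
```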
DIANA (Divisive Analysis)
◼ Introduced in Kaufmann and Rousseeuw (1990)
◼ Implemented in statistical analysis packages, e.g., Splus
◼ Inverse order of AGNES
◼ Eventually each node forms a cluster on its own

(Figure: three 10 × 10 scatter plots showing successive divisive splits.)


Distance between Clusters
◼ Single link: smallest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = \min_{t_{ip} \in K_i, t_{jq} \in K_j} dist(t_{ip}, t_{jq})
◼ Complete link: largest distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = \max_{t_{ip} \in K_i, t_{jq} \in K_j} dist(t_{ip}, t_{jq})
◼ Average: average distance between an element in one cluster and an element in the other, i.e., dist(K_i, K_j) = avg_{t_{ip} \in K_i, t_{jq} \in K_j} dist(t_{ip}, t_{jq})
◼ Centroid: distance between the centroids of two clusters, i.e., dist(K_i, K_j) = dist(C_i, C_j)
◼ Medoid: distance between the medoids of two clusters, i.e., dist(K_i, K_j) = dist(M_i, M_j)
  ◼ Medoid: a chosen, centrally located object in the cluster
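The first four measures are straightforward to compute from the pairwise distance matrix; a minimal NumPy sketch, assuming two clusters given as (n, d) and (m, d) arrays:

```python
import numpy as np

def pairwise(Ki, Kj):
    """All pairwise Euclidean distances between two clusters."""
    return np.linalg.norm(Ki[:, None] - Kj[None, :], axis=2)

def single_link(Ki, Kj):   return pairwise(Ki, Kj).min()
def complete_link(Ki, Kj): return pairwise(Ki, Kj).max()
def average_link(Ki, Kj):  return pairwise(Ki, Kj).mean()
def centroid_dist(Ki, Kj): return np.linalg.norm(Ki.mean(0) - Kj.mean(0))
```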
Centroid, Radius and Diameter of a Cluster (for numerical data sets)
◼ Centroid: the "middle" of a cluster

  C_m = \frac{\sum_{i=1}^{N} t_{ip}}{N}

◼ Radius: square root of the average distance from any point of the cluster to its centroid

  R_m = \sqrt{\frac{\sum_{i=1}^{N} (t_{ip} - c_m)^2}{N}}

◼ Diameter: square root of the average mean squared distance between all pairs of points in the cluster

  D_m = \sqrt{\frac{\sum_{i=1}^{N} \sum_{j=1}^{N} (t_{ip} - t_{jq})^2}{N(N-1)}}
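These three statistics translate directly into code; a minimal sketch for a cluster stored as an (N, d) array (function names are illustrative):

```python
import numpy as np

def centroid(T):
    """Mean point of a cluster given as an (N, d) array."""
    return T.mean(axis=0)

def radius(T):
    """Root of the average squared distance to the centroid."""
    return np.sqrt(((T - centroid(T)) ** 2).sum(axis=1).mean())

def diameter(T):
    """Root of the average squared distance over all ordered pairs."""
    N = len(T)
    sq = ((T[:, None] - T[None, :]) ** 2).sum(axis=2)  # pairwise squared distances
    return np.sqrt(sq.sum() / (N * (N - 1)))
```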


BIRCH (Balanced Iterative Reducing and Clustering Using Hierarchies)
◼ Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering
  ◼ Phase 1: scan the DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve its inherent clustering structure)
  ◼ Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF tree
◼ Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans
◼ Weakness: handles only numeric data, and is sensitive to the order of the data records


Clustering Feature Vector in BIRCH
◼ Clustering Feature: CF = (N, LS, SS)
  ◼ N: number of data points
  ◼ LS: linear sum of the N points: \sum_{i=1}^{N} X_i
  ◼ SS: square sum of the N points: \sum_{i=1}^{N} X_i^2
◼ Example: for the five points (3,4), (2,6), (4,5), (4,7), (3,8):

  CF = (5, (16, 30), (54, 190))

(Figure: the five points plotted on a 10 × 10 grid.)
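A minimal sketch of computing and merging CFs (the names cf and cf_merge are illustrative); additivity is what makes incremental CF-tree insertion cheap:

```python
import numpy as np

def cf(points):
    """Clustering Feature of a set of points: (N, LS, SS), per dimension."""
    P = np.asarray(points, dtype=float)
    return len(P), P.sum(axis=0), (P ** 2).sum(axis=0)

def cf_merge(a, b):
    """CFs are additive: merging two subclusters just adds their CFs."""
    return a[0] + b[0], a[1] + b[1], a[2] + b[2]

print(cf([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]))
# -> (5, array([16., 30.]), array([ 54., 190.])), matching the slide's example
```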
CF-Tree in BIRCH
◼ Clustering feature:
  ◼ Summary of the statistics for a given subcluster: the 0th, 1st, and 2nd moments of the subcluster from the statistical point of view
  ◼ Registers crucial measurements for computing clusters and utilizes storage efficiently
◼ A CF tree is a height-balanced tree that stores the clustering features for a hierarchical clustering
  ◼ A non-leaf node in the tree has descendants or "children"
  ◼ The non-leaf nodes store sums of the CFs of their children
◼ A CF tree has two parameters
  ◼ Branching factor: max # of children
  ◼ Threshold: max diameter of sub-clusters stored at the leaf nodes
The CF Tree Structure

(Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6. The root holds entries CF1 … CF6, each pointing to a child; a non-leaf node holds entries CF1 … CF5 with their children; leaf nodes hold CF entries and are chained together with prev/next pointers.)


The BIRCH Algorithm
◼ Cluster diameter:

  D = \sqrt{\frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j=1}^{n} (x_i - x_j)^2}

◼ For each point in the input:
  ◼ Find the closest leaf entry
  ◼ Add the point to the leaf entry and update the CF
  ◼ If the entry diameter > max_diameter, then split the leaf, and possibly its parents
◼ The algorithm is O(n)
◼ Concerns
  ◼ Sensitive to the insertion order of data points
  ◼ Because the size of leaf nodes is fixed, the clusters may not be natural
  ◼ Clusters tend to be spherical given the radius and diameter measures
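A useful detail, sketched here under the CF definitions above: the diameter can be computed from a CF alone, since per dimension \sum_i \sum_j (x_i - x_j)^2 = 2N \cdot SS - 2 \cdot LS^2, so BIRCH never needs to revisit the raw points (the function name is illustrative):

```python
import numpy as np

def diameter_from_cf(N, LS, SS):
    """Cluster diameter computed directly from CF = (N, LS, SS)."""
    # sum of squared differences over all ordered pairs, per dimension,
    # then summed across dimensions
    pair_sum = (2 * N * SS - 2 * LS ** 2).sum()
    return np.sqrt(pair_sum / (N * (N - 1)))
```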
Density-Based Clustering Methods
◼ Clustering based on density (a local cluster criterion), such as density-connected points
◼ Major features:
  ◼ Discover clusters of arbitrary shape
  ◼ Handle noise
  ◼ One scan
  ◼ Need density parameters as a termination condition
◼ Several interesting studies:
  ◼ DBSCAN: Ester, et al. (KDD'96)
  ◼ OPTICS: Ankerst, et al. (SIGMOD'99)
  ◼ DENCLUE: Hinneburg & Keim (KDD'98)
  ◼ CLIQUE: Agrawal, et al. (SIGMOD'98) (more grid-based)


Density-Based Clustering: Basic Concepts
◼ Two parameters:
  ◼ Eps: maximum radius of the neighbourhood
  ◼ MinPts: minimum number of points in an Eps-neighbourhood of that point
◼ N_Eps(p): {q belongs to D | dist(p, q) ≤ Eps}
◼ Directly density-reachable: a point p is directly density-reachable from a point q w.r.t. Eps, MinPts if
  ◼ p belongs to N_Eps(q)
  ◼ core point condition: |N_Eps(q)| ≥ MinPts

(Figure: p inside the Eps-neighbourhood of a core point q, with MinPts = 5, Eps = 1 cm.)
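The two definitions above reduce to one neighbourhood query; a minimal sketch assuming the database D is an (n, d) NumPy array (the names region_query and is_core are illustrative):

```python
import numpy as np

def region_query(D, p, eps):
    """Indices of all points within Eps of point index p, i.e., N_Eps(p)."""
    return np.flatnonzero(np.linalg.norm(D - D[p], axis=1) <= eps)

def is_core(D, p, eps, min_pts):
    """Core point condition: |N_Eps(p)| >= MinPts."""
    return len(region_query(D, p, eps)) >= min_pts
```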


Density-Reachable and Density-Connected
◼ Density-reachable:
  ◼ A point p is density-reachable from a point q w.r.t. Eps, MinPts if there is a chain of points p_1, …, p_n, with p_1 = q and p_n = p, such that p_{i+1} is directly density-reachable from p_i
◼ Density-connected:
  ◼ A point p is density-connected to a point q w.r.t. Eps, MinPts if there is a point o such that both p and q are density-reachable from o w.r.t. Eps and MinPts

(Figures: a chain of points from q to p; and points p and q both density-reachable from a common point o.)
DBSCAN: Density-Based Spatial Clustering of Applications with Noise
◼ Relies on a density-based notion of cluster: a cluster is defined as a maximal set of density-connected points
◼ Discovers clusters of arbitrary shape in spatial databases with noise

(Figure: core, border, and outlier points for Eps = 1 cm, MinPts = 5.)


DBSCAN: The Algorithm
◼ Arbitrarily select a point p
◼ Retrieve all points density-reachable from p w.r.t. Eps and MinPts
◼ If p is a core point, a cluster is formed
◼ If p is a border point, no points are density-reachable from p, and DBSCAN visits the next point of the database
◼ Continue the process until all of the points have been processed
◼ (A compact sketch of this loop follows.)
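A compact sketch of the loop above, assuming an (n, d) NumPy array and Euclidean distance; this is an illustrative brute-force version with no spatial index, not an optimized implementation:

```python
import numpy as np

def dbscan(D, eps, min_pts):
    """Returns an array of cluster ids (0, 1, ...); -1 marks noise."""
    n = len(D)
    labels = np.full(n, -2)  # -2: unvisited, -1: noise
    neighbors = lambda p: np.flatnonzero(np.linalg.norm(D - D[p], axis=1) <= eps)
    cluster = 0
    for p in range(n):
        if labels[p] != -2:
            continue
        N = neighbors(p)
        if len(N) < min_pts:       # not core: noise for now (maybe border later)
            labels[p] = -1
            continue
        labels[p] = cluster        # p is a core point: grow its cluster
        seeds = list(N)
        while seeds:
            q = seeds.pop()
            if labels[q] == -1:
                labels[q] = cluster    # former noise becomes a border point
            if labels[q] != -2:
                continue
            labels[q] = cluster
            Nq = neighbors(q)
            if len(Nq) >= min_pts:     # q is also core: expand further
                seeds.extend(Nq)
        cluster += 1
    return labels

# usage: labels = dbscan(np.random.rand(200, 2), eps=0.1, min_pts=5)
```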


OPTICS: A Cluster-Ordering Method (1999)
◼ OPTICS: Ordering Points To Identify the Clustering Structure
  ◼ Ankerst, Breunig, Kriegel, and Sander (SIGMOD'99)
  ◼ Produces a special order of the database w.r.t. its density-based clustering structure
  ◼ This cluster ordering contains information equivalent to the density-based clusterings corresponding to a broad range of parameter settings
  ◼ Good for both automatic and interactive cluster analysis, including finding the intrinsic clustering structure
  ◼ Can be represented graphically or using visualization techniques
Grid-Based Clustering Method
◼ Uses a multi-resolution grid data structure
◼ Several interesting methods
  ◼ STING (a STatistical INformation Grid approach) by Wang, Yang and Muntz (1997)
  ◼ WaveCluster by Sheikholeslami, Chatterjee, and Zhang (VLDB'98)
    ◼ A multi-resolution clustering approach using the wavelet method
  ◼ CLIQUE: Agrawal, et al. (SIGMOD'98)
    ◼ Both grid-based and subspace clustering


STING: A Statistical Information Grid Approach
◼ Wang, Yang and Muntz (VLDB'97)
◼ The spatial area is divided into rectangular cells
◼ There are several levels of cells corresponding to different levels of resolution


CLIQUE (Clustering In QUEst)
◼ Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD'98)
◼ Automatically identifies subspaces of a high-dimensional data space that allow better clustering than the original space
◼ CLIQUE can be considered both density-based and grid-based (a small sketch of dense-unit counting follows)
  ◼ It partitions each dimension into the same number of equal-length intervals
  ◼ It partitions an m-dimensional data space into non-overlapping rectangular units
  ◼ A unit is dense if the fraction of the total data points contained in the unit exceeds the input model parameter
  ◼ A cluster is a maximal set of connected dense units within a subspace
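To make "dense unit" concrete, here is a minimal full-space sketch of grid-cell counting; note it only illustrates the density test, not CLIQUE's bottom-up search over subspaces (all names and the normalization are illustrative assumptions):

```python
import numpy as np
from collections import Counter

def dense_units(X, n_intervals, density_threshold):
    """Map each point to its grid cell; keep cells whose fraction of
    all points exceeds the threshold (full-space, single-pass version)."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    cells = np.floor((X - lo) / (hi - lo + 1e-12) * n_intervals).astype(int)
    counts = Counter(map(tuple, cells))
    n = len(X)
    return {c for c, k in counts.items() if k / n > density_threshold}
```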
Measuring Clustering Quality
◼ Two methods: extrinsic vs. intrinsic
◼ Extrinsic: supervised, i.e., the ground truth is available
  ◼ Compare a clustering against the ground truth using a clustering quality measure
  ◼ Ex.: BCubed precision and recall metrics
◼ Intrinsic: unsupervised, i.e., the ground truth is unavailable
  ◼ Evaluate the goodness of a clustering by considering how well the clusters are separated and how compact they are
  ◼ Ex.: silhouette coefficient (a sketch follows)
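A minimal sketch of the silhouette coefficient, assuming small data (it builds the full distance matrix) and at least two clusters; for each point, a is the mean intra-cluster distance and b the mean distance to the nearest other cluster, and the coefficient is (b - a) / max(a, b):

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette over all points; values near 1 indicate compact,
    well-separated clusters. Assumes at least two clusters."""
    labels = np.asarray(labels)
    d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    s = []
    for i, li in enumerate(labels):
        same = (labels == li)
        a = d[i][same].sum() / max(same.sum() - 1, 1)  # self-distance is 0
        b = min(d[i][labels == lj].mean() for lj in set(labels) if lj != li)
        s.append((b - a) / max(a, b))
    return float(np.mean(s))
```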
