Chapter 10. Cluster Analysis: Basic Concepts and Methods
Subrata Kumer Paul
Jiawei Han, Micheline Kamber and Jian Pei
Data Mining: Concepts and Techniques, 3rd ed.
The Morgan Kaufmann Series in Data Management Systems
Morgan Kaufmann Publishers, July 2011. ISBN 978-0123814791
This chapter surveys the major approaches to cluster analysis: partitioning, hierarchical, density-based, and grid-based methods. It details popular partitioning algorithms such as k-means and k-medoids, explaining how they work and noting that k-means is sensitive to outliers while k-medoids uses actual data points as cluster representatives. Hierarchical methods such as AGNES and DIANA are also covered, including the measures used to compute distances between clusters during merging or splitting, along with how clustering quality is evaluated and the main requirements and challenges of cluster analysis.
3. What is Cluster Analysis?
Cluster: A collection of data objects
similar (or related) to one another within the same group
dissimilar (or unrelated) to the objects in other groups
Cluster analysis (or clustering, data segmentation, …)
Finding similarities between data according to the
characteristics found in the data and grouping similar
data objects into clusters
Unsupervised learning: no predefined classes (i.e., learning
by observations vs. learning by examples: supervised)
Typical applications
As a stand-alone tool to get insight into data distribution
As a preprocessing step for other algorithms
4. Clustering for Data Understanding and Applications
Biology: taxonomy of living things: kingdom, phylum, class, order,
family, genus and species
Information retrieval: document clustering
Land use: Identification of areas of similar land use in an earth
observation database
Marketing: Help marketers discover distinct groups in their customer
bases, and then use this knowledge to develop targeted marketing
programs
City-planning: Identifying groups of houses according to their house
type, value, and geographical location
Earth-quake studies: Observed earth quake epicenters should be
clustered along continent faults
Climate: understanding Earth's climate, finding patterns in atmospheric
and ocean data
Economic science: market research
5. Clustering as a Preprocessing Tool (Utility)
Summarization:
Preprocessing for regression, PCA, classification, and
association analysis
Compression:
Image processing: vector quantization
Finding K-nearest Neighbors
Localizing search to one or a small number of clusters
Outlier detection
Outliers are often viewed as those “far away” from any
cluster
6. Quality: What Is Good Clustering?
A good clustering method will produce high quality
clusters
high intra-class similarity: cohesive within clusters
low inter-class similarity: distinctive between clusters
The quality of a clustering method depends on
the similarity measure used by the method
its implementation, and
Its ability to discover some or all of the hidden patterns
6
7. Measure the Quality of Clustering
Dissimilarity/Similarity metric
Similarity is expressed in terms of a distance function,
typically metric: d(i, j)
The definitions of distance functions are usually rather
different for interval-scaled, boolean, categorical, ordinal,
ratio, and vector variables
Weights should be associated with different variables
based on applications and data semantics
Quality of clustering:
There is usually a separate “quality” function that
measures the “goodness” of a cluster.
It is hard to define “similar enough” or “good enough”
The answer is typically highly subjective
7
8. Considerations for Cluster Analysis
Partitioning criteria
Single level vs. hierarchical partitioning (often, multi-level
hierarchical partitioning is desirable)
Separation of clusters
Exclusive (e.g., one customer belongs to only one region) vs.
non-exclusive (e.g., one document may belong to more than one
class)
Similarity measure
Distance-based (e.g., Euclidean, road network, vector) vs.
connectivity-based (e.g., density or contiguity)
Clustering space
Full space (often when low dimensional) vs. subspaces (often in
high-dimensional clustering)
8
9. Requirements and Challenges
Scalability
Clustering all the data instead of only on samples
Ability to deal with different types of attributes
Numerical, binary, categorical, ordinal, linked, and mixture of these
Constraint-based clustering
User may give inputs on constraints
Use domain knowledge to determine input parameters
Interpretability and usability
Others
Discovery of clusters with arbitrary shape
Ability to deal with noisy data
Incremental clustering and insensitivity to input order
High dimensionality
9
10. Major Clustering Approaches (I)
Partitioning approach:
Construct various partitions and then evaluate them by some
criterion, e.g., minimizing the sum of square errors
Typical methods: k-means, k-medoids, CLARANS
Hierarchical approach:
Create a hierarchical decomposition of the set of data (or objects)
using some criterion
Typical methods: Diana, Agnes, BIRCH, CAMELEON
Density-based approach:
Based on connectivity and density functions
Typical methods: DBSCAN, OPTICS, DENCLUE
Grid-based approach:
based on a multiple-level granularity structure
Typical methods: STING, WaveCluster, CLIQUE
10
11. Major Clustering Approaches (II)
Model-based:
A model is hypothesized for each of the clusters, and the aim is to find
the best fit of the data to the given model
Typical methods: EM, SOM, COBWEB
Frequent pattern-based:
Based on the analysis of frequent patterns
Typical methods: p-Cluster
User-guided or constraint-based:
Clustering by considering user-specified or application-specific
constraints
Typical methods: COD (obstacles), constrained clustering
Link-based clustering:
Objects are often linked together in various ways
Massive links can be used to cluster objects: SimRank, LinkClus
11
13. Partitioning Algorithms: Basic Concept
Partitioning method: Partitioning a database D of n objects into a set of k
clusters, such that the sum of squared distances is minimized (where ci is
the centroid or medoid of cluster Ci)
Given k, find a partition of k clusters that optimizes the chosen partitioning
criterion
Global optimal: exhaustively enumerate all partitions
Heuristic methods: k-means and k-medoids algorithms
k-means (MacQueen’67, Lloyd’57/’82): Each cluster is represented by
the center of the cluster
k-medoids or PAM (Partition around medoids) (Kaufman &
Rousseeuw’87): Each cluster is represented by one of the objects in
the cluster
$E = \sum_{i=1}^{k} \sum_{p \in C_i} (p - c_i)^2$
14. The K-Means Clustering Method
Given k, the k-means algorithm is implemented in four
steps:
Partition objects into k nonempty subsets
Compute seed points as the centroids of the
clusters of the current partitioning (the centroid is
the center, i.e., mean point, of the cluster)
Assign each object to the cluster with the nearest
seed point
Go back to Step 2, stop when the assignment does
not change
14
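These four steps map directly onto code. The following is a minimal NumPy sketch, assuming Euclidean distance; the function name kmeans, the random initialization, and the toy data are illustrative choices rather than anything prescribed by the slides.

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Minimal k-means sketch: X is an (n, d) array, k the number of clusters."""
    rng = np.random.default_rng(seed)
    # Pick k distinct objects as initial centroids (one common initialization)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(max_iter):
        # Assign each object to the cluster with the nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean point of its cluster
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Stop when the centroids (hence assignments) no longer change
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Toy usage
X = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
labels, centroids = kmeans(X, k=2)
print(labels, centroids)
```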
15. An Example of K-Means Clustering
K = 2
[Figure: objects are arbitrarily partitioned into k groups, the cluster centroids are
updated, objects are reassigned to the nearest centroid, and the loop repeats if
needed.]
Starting from the initial data set:
Partition objects into k nonempty subsets
Repeat
Compute the centroid (i.e., mean point) of each partition
Assign each object to the cluster of its nearest centroid
Until no change
16. Comments on the K-Means Method
Strength: Efficient: O(tkn), where n is # objects, k is # clusters, and t is #
iterations. Normally, k, t << n.
Comparing: PAM: O(k(n-k)^2), CLARA: O(ks^2 + k(n-k))
Comment: Often terminates at a local optimum.
Weakness
Applicable only to objects in a continuous n-dimensional space
Use the k-modes method for categorical data
In comparison, k-medoids can be applied to a wide range of data
Need to specify k, the number of clusters, in advance (there are
ways to automatically determine the best k; see Hastie et al., 2009)
Sensitive to noisy data and outliers
Not suitable to discover clusters with non-convex shapes
16
17. Variations of the K-Means Method
Most variants of k-means differ in
Selection of the initial k means
Dissimilarity calculations
Strategies to calculate cluster means
Handling categorical data: k-modes
Replacing means of clusters with modes
Using new dissimilarity measures to deal with categorical objects
Using a frequency-based method to update modes of clusters
A mixture of categorical and numerical data: k-prototype method
17
18. What Is the Problem of the K-Means Method?
The k-means algorithm is sensitive to outliers !
Since an object with an extremely large value may substantially
distort the distribution of the data
K-Medoids: Instead of taking the mean value of the objects in a cluster
as a reference point, a medoid can be used, which is the most
centrally located object in a cluster
[Figure: two 0-10 by 0-10 example plots accompanying the k-means vs. k-medoids
comparison.]
19. PAM: A Typical K-Medoids Algorithm
[Figure: with K = 2 on a 0-10 by 0-10 grid, k objects are arbitrarily chosen as initial
medoids and each remaining object is assigned to the nearest medoid (total cost = 20).
A non-medoid object O_random is randomly selected, the total cost of swapping a medoid
with O_random is computed (total cost = 26 in the example), the swap is made only if
quality is improved, and the loop continues until no change.]
20. The K-Medoid Clustering Method
K-Medoids Clustering: Find representative objects (medoids) in clusters
PAM (Partitioning Around Medoids, Kaufmann & Rousseeuw 1987)
Starts from an initial set of medoids and iteratively replaces one
of the medoids by one of the non-medoids if it improves the total
distance of the resulting clustering
PAM works effectively for small data sets, but does not scale well
for large data sets (due to the computational complexity)
Efficiency improvement on PAM
CLARA (Kaufmann & Rousseeuw, 1990): PAM on samples
CLARANS (Ng & Han, 1994): Randomized re-sampling
20
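As a rough illustration of the swap-based idea behind PAM, here is a compact NumPy sketch: choose k medoids, then repeatedly replace a medoid with a non-medoid whenever that lowers the total distance. It is a simplified sketch, not the original algorithm's exact cost bookkeeping, and the function name pam and the stopping rule are our own.

```python
import numpy as np

def pam(X, k, max_iter=50, seed=0):
    """PAM-style k-medoids sketch: swap medoids while the total cost improves."""
    rng = np.random.default_rng(seed)
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(n, size=k, replace=False)

    def total_cost(meds):
        # Cost = sum of distances from each object to its nearest medoid
        return dist[:, meds].min(axis=1).sum()

    cost = total_cost(medoids)
    for _ in range(max_iter):
        improved = False
        for i in range(k):                 # each current medoid
            for h in range(n):             # each non-medoid candidate
                if h in medoids:
                    continue
                candidate = medoids.copy()
                candidate[i] = h
                c = total_cost(candidate)
                if c < cost:               # keep the swap only if quality improves
                    medoids, cost, improved = candidate, c, True
        if not improved:
            break
    labels = dist[:, medoids].argmin(axis=1)
    return medoids, labels
```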
22. Hierarchical Clustering
Use distance matrix as clustering criteria. This method
does not require the number of clusters k as an input, but
needs a termination condition
[Figure: five objects a, b, c, d, e. The agglomerative approach (AGNES) merges
bottom-up over steps 0-4: {a, b}, {d, e}, {c, d, e}, and finally {a, b, c, d, e};
the divisive approach (DIANA) splits the same hierarchy top-down in the reverse order.]
23. AGNES (Agglomerative Nesting)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical packages, e.g., Splus
Use the single-link method and the dissimilarity matrix
Merge nodes that have the least dissimilarity
Go on in a non-descending fashion
Eventually all nodes belong to the same cluster
[Figure: three 0-10 by 0-10 plots illustrating the AGNES merging process.]
24. Dendrogram: Shows How Clusters are Merged
Decompose data objects into several levels of nested
partitioning (a tree of clusters), called a dendrogram
A clustering of the data objects is obtained by cutting
the dendrogram at the desired level, then each
connected component forms a cluster
24
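A dendrogram and the corresponding cut can be produced with SciPy's hierarchical-clustering routines. The sketch below uses single-link (AGNES-style) merging on five made-up 2-D points; the data, labels, and chosen cut level are purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster
import matplotlib.pyplot as plt

# Five 2-D objects, loosely matching the a..e example above
X = np.array([[1.0, 1.0], [1.5, 1.2], [5.0, 5.0], [5.2, 5.3], [5.1, 6.0]])

Z = linkage(X, method="single")              # single-link agglomerative clustering
dendrogram(Z, labels=["a", "b", "c", "d", "e"])
plt.show()

# Cutting the dendrogram at a chosen level yields a flat clustering
labels = fcluster(Z, t=2, criterion="maxclust")  # ask for 2 clusters
print(labels)
```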
25. DIANA (Divisive Analysis)
Introduced in Kaufmann and Rousseeuw (1990)
Implemented in statistical analysis packages, e.g., Splus
Inverse order of AGNES
Eventually each node forms a cluster on its own
[Figure: three 0-10 by 0-10 plots illustrating the DIANA splitting process.]
26. Distance between Clusters
Single link: smallest distance between an element in one cluster and an
element in the other, i.e., dist(Ki, Kj) = min(tip, tjq)
Complete link: largest distance between an element in one cluster and
an element in the other, i.e., dist(Ki, Kj) = max(tip, tjq)
Average: avg distance between an element in one cluster and an
element in the other, i.e., dist(Ki, Kj) = avg(tip, tjq)
Centroid: distance between the centroids of two clusters, i.e., dist(Ki, Kj)
= dist(Ci, Cj)
Medoid: distance between the medoids of two clusters, i.e., dist(Ki, Kj) =
dist(Mi, Mj)
Medoid: a chosen, centrally located object in the cluster
26
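For concreteness, the linkage measures listed above can be computed directly from two clusters of points. A small sketch, assuming Euclidean distance and clusters given as NumPy arrays (the function name and sample clusters are ours):

```python
import numpy as np

def cluster_distances(A, B):
    """Illustrative inter-cluster distances for two clusters A, B ((n, d) arrays)."""
    pairwise = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return {
        "single":   pairwise.min(),                                   # closest pair
        "complete": pairwise.max(),                                   # farthest pair
        "average":  pairwise.mean(),                                  # mean over all pairs
        "centroid": np.linalg.norm(A.mean(axis=0) - B.mean(axis=0)),  # centroid distance
    }

A = np.array([[1.0, 1.0], [1.5, 1.2]])
B = np.array([[5.0, 5.0], [5.2, 5.3]])
print(cluster_distances(A, B))
```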
27. Centroid, Radius and Diameter of a
Cluster (for numerical data sets)
Centroid: the “middle” of a cluster
Radius: square root of average distance from any point
of the cluster to its centroid
Diameter: square root of average mean squared
distance between all pairs of points in the cluster
$C_m = \frac{\sum_{i=1}^{N} t_i}{N}$

$R_m = \sqrt{\frac{\sum_{i=1}^{N} (t_i - C_m)^2}{N}}$

$D_m = \sqrt{\frac{\sum_{i=1}^{N} \sum_{j=1}^{N} (t_i - t_j)^2}{N(N-1)}}$

where $t_1, \ldots, t_N$ are the N points of the cluster.
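These three quantities are straightforward to compute from a cluster's points. A short sketch assuming Euclidean distance (the function name and the five sample points, borrowed from the BIRCH example later in the chapter, are illustrative):

```python
import numpy as np

def centroid_radius_diameter(points):
    """Centroid, radius, and diameter of a cluster given as an (N, d) array."""
    N = len(points)
    centroid = points.mean(axis=0)
    # Radius: root of the average squared distance from points to the centroid
    radius = np.sqrt(((points - centroid) ** 2).sum(axis=1).mean())
    # Diameter: root of the average squared distance between all pairs of points
    pairwise_sq = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    diameter = np.sqrt(pairwise_sq.sum() / (N * (N - 1)))
    return centroid, radius, diameter

pts = np.array([[3.0, 4.0], [2.0, 6.0], [4.0, 5.0], [4.0, 7.0], [3.0, 8.0]])
print(centroid_radius_diameter(pts))
```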
28. Extensions to Hierarchical Clustering
Major weakness of agglomerative clustering methods
Can never undo what was done previously
Do not scale well: time complexity of at least O(n^2), where n
is the total number of objects
Integration of hierarchical & distance-based clustering
BIRCH (1996): uses CF-tree and incrementally adjusts the
quality of sub-clusters
CHAMELEON (1999): hierarchical clustering using
dynamic modeling
28
29. BIRCH (Balanced Iterative Reducing and
Clustering Using Hierarchies)
Zhang, Ramakrishnan & Livny, SIGMOD’96
Incrementally construct a CF (Clustering Feature) tree, a hierarchical
data structure for multiphase clustering
Phase 1: scan DB to build an initial in-memory CF tree (a multi-level
compression of the data that tries to preserve the inherent clustering
structure of the data)
Phase 2: use an arbitrary clustering algorithm to cluster the leaf
nodes of the CF-tree
Scales linearly: finds a good clustering with a single scan and improves
the quality with a few additional scans
Weakness: handles only numeric data, and sensitive to the order of the
data record
29
30. Clustering Feature Vector in BIRCH
Clustering Feature (CF): CF = (N, LS, SS)
N: Number of data points
LS: linear sum of the N points: $LS = \sum_{i=1}^{N} X_i$
SS: square sum of the N points: $SS = \sum_{i=1}^{N} X_i^2$
Example: for the five points (3,4), (2,6), (4,5), (4,7), (3,8),
CF = (5, (16,30), (54,190))
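Because LS and SS are simple sums, CF vectors are additive: the CF of a merged sub-cluster is the component-wise sum of the two CFs, so BIRCH never has to revisit the raw points. A minimal sketch (function names are ours) that reproduces the example above:

```python
import numpy as np

def clustering_feature(points):
    """CF = (N, LS, SS) for a set of points, as used by BIRCH."""
    pts = np.asarray(points, dtype=float)
    return len(pts), pts.sum(axis=0), (pts ** 2).sum(axis=0)

def merge_cf(cf1, cf2):
    """CFs are additive, so sub-clusters can be merged without revisiting points."""
    return cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2]

cf = clustering_feature([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)])
print(cf)  # (5, array([16., 30.]), array([ 54., 190.]))
```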
31. CF-Tree in BIRCH
Clustering feature:
Summary of the statistics for a given subcluster: the 0-th, 1st,
and 2nd moments of the subcluster from the statistical point of
view
Registers crucial measurements for computing clusters and
utilizes storage efficiently
A CF tree is a height-balanced tree that stores the clustering
features for a hierarchical clustering
A nonleaf node in a tree has descendants or “children”
The nonleaf nodes store sums of the CFs of their children
A CF tree has two parameters
Branching factor: max # of children
Threshold: max diameter of sub-clusters stored at the leaf
nodes
31
32. The CF Tree Structure
[Figure: a CF tree with branching factor B = 7 and leaf capacity L = 6. The root and
non-leaf nodes hold entries CF_1, CF_2, ..., each with a pointer to a child node; leaf
nodes hold CF entries and are chained together with prev/next pointers.]
33. The Birch Algorithm
Cluster Diameter
For each point in the input
Find closest leaf entry
Add point to leaf entry and update CF
If entry diameter > max_diameter, then split leaf, and possibly parents
Algorithm is O(n)
Concerns
Sensitive to the insertion order of data points
Since the size of leaf nodes is fixed, the clusters may not be so natural
Clusters tend to be spherical given the radius and diameter measures
Cluster diameter used above: $D = \sqrt{\frac{1}{n(n-1)} \sum_{i} \sum_{j} (x_i - x_j)^2}$
34. CHAMELEON: Hierarchical Clustering Using
Dynamic Modeling (1999)
CHAMELEON: G. Karypis, E. H. Han, and V. Kumar, 1999
Measures the similarity based on a dynamic model
Two clusters are merged only if the interconnectivity and
closeness (proximity) between two clusters are high
relative to the internal interconnectivity of the clusters
and closeness of items within the clusters
Graph-based, and a two-phase algorithm
1. Use a graph-partitioning algorithm: cluster objects into a
large number of relatively small sub-clusters
2. Use an agglomerative hierarchical clustering algorithm:
find the genuine clusters by repeatedly combining these
sub-clusters
34
35. Overall Framework of CHAMELEON
[Figure: data set -> construct a sparse k-NN graph -> partition the graph into
sub-clusters -> merge partitions -> final clusters.]
k-NN graph: p and q are connected if q is among the top k closest neighbors of p
Relative interconnectivity: connectivity of c1 and c2 over internal connectivity
Relative closeness: closeness of c1 and c2 over internal closeness
37. Probabilistic Hierarchical Clustering
Algorithmic hierarchical clustering
Nontrivial to choose a good distance measure
Hard to handle missing attribute values
Optimization goal not clear: heuristic, local search
Probabilistic hierarchical clustering
Use probabilistic models to measure distances between clusters
Generative model: Regard the set of data objects to be clustered as
a sample of the underlying data generation mechanism to be
analyzed
Easy to understand, same efficiency as algorithmic agglomerative
clustering method, can handle partially observed data
In practice, assume the generative models adopt common distribution
functions, e.g., the Gaussian distribution or Bernoulli distribution, governed
by parameters
37
38. Generative Model
Given a set of 1-D points X = {x1, ..., xn} for clustering
analysis and assuming they are generated by a
Gaussian distribution $N(\mu, \sigma^2)$:
The probability that a point $x_i \in X$ is generated by the model:
$P(x_i \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(x_i - \mu)^2}{2\sigma^2}}$
The likelihood that X is generated by the model:
$L(N(\mu, \sigma^2) : X) = \prod_{i=1}^{n} P(x_i \mid \mu, \sigma^2)$
The task of learning the generative model: find the parameters $\mu$ and $\sigma^2$
that maximize the likelihood, i.e.,
$N(\mu_0, \sigma_0^2) = \operatorname{argmax}\, L(N(\mu, \sigma^2) : X)$
38
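For 1-D data the maximum-likelihood parameters have a familiar closed form: mu is the sample mean and sigma^2 is the mean squared deviation. A small sketch (function names and data are illustrative):

```python
import numpy as np

def fit_gaussian_mle(x):
    """Maximum-likelihood estimates of mu and sigma^2 for 1-D data x."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma2 = ((x - mu) ** 2).mean()   # MLE uses 1/n, not 1/(n-1)
    return mu, sigma2

def log_likelihood(x, mu, sigma2):
    """Log of the likelihood that N(mu, sigma2) generated the points x."""
    x = np.asarray(x, dtype=float)
    return (-0.5 * np.log(2 * np.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2)).sum()

x = np.array([1.0, 1.2, 0.9, 5.0, 5.1])
mu, sigma2 = fit_gaussian_mle(x)
print(mu, sigma2, log_likelihood(x, mu, sigma2))
```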
39. A Probabilistic Hierarchical Clustering Algorithm
For a set of objects partitioned into m clusters C1, ..., Cm, the quality can
be measured by
$Q(\{C_1, \ldots, C_m\}) = \prod_{i=1}^{m} P(C_i)$
where P() is the maximum likelihood
Distance between clusters C1 and C2:
$dist(C_1, C_2) = -\log \frac{P(C_1 \cup C_2)}{P(C_1)\, P(C_2)}$
Algorithm: Progressively merge points and clusters
Input: D = {o1, ..., on}: a data set containing n objects
Output: A hierarchy of clusters
Method
Create a cluster for each object Ci = {oi}, 1 ≤ i ≤ n;
For i = 1 to n {
Find the pair of clusters Ci and Cj such that
Ci, Cj = argmax_{i ≠ j} { log ( P(Ci ∪ Cj) / (P(Ci) P(Cj)) ) };
If log ( P(Ci ∪ Cj) / (P(Ci) P(Cj)) ) > 0 then merge Ci and Cj }
39
41. Density-Based Clustering Methods
Clustering based on density (local cluster criterion), such as
density-connected points
Major features:
Discover clusters of arbitrary shape
Handle noise
One scan
Need density parameters as termination condition
Several interesting studies:
DBSCAN: Ester, et al. (KDD’96)
OPTICS: Ankerst, et al (SIGMOD’99).
DENCLUE: Hinneburg & D. Keim (KDD’98)
CLIQUE: Agrawal, et al. (SIGMOD’98) (more grid-based)
41
42. Density-Based Clustering: Basic Concepts
Two parameters:
Eps: Maximum radius of the neighbourhood
MinPts: Minimum number of points in an Eps-
neighbourhood of that point
NEps(p): {q belongs to D | dist(p,q) ≤ Eps}
Directly density-reachable: A point p is directly density-
reachable from a point q w.r.t. Eps, MinPts if
p belongs to NEps(q)
core point condition:
|NEps (q)| ≥ MinPts
[Figure: with MinPts = 5 and Eps = 1 cm, a core point q whose Eps-neighborhood
contains at least MinPts points, and a point p directly density-reachable from q.]
42
43. Density-Reachable and Density-Connected
Density-reachable:
A point p is density-reachable from a
point q w.r.t. Eps, MinPts if there is a
chain of points p1, …, pn, p1 = q, pn =
p such that pi+1 is directly density-
reachable from pi
Density-connected
A point p is density-connected to a
point q w.r.t. Eps, MinPts if there is a
point o such that both, p and q are
density-reachable from o w.r.t. Eps
and MinPts
[Figures: a chain of points p1, ..., pn from q to p illustrating density-reachability,
and a point o from which both p and q are density-reachable, illustrating
density-connectedness.]
43
44. DBSCAN: Density-Based Spatial Clustering of
Applications with Noise
Relies on a density-based notion of cluster: A cluster is
defined as a maximal set of density-connected points
Discovers clusters of arbitrary shape in spatial databases
with noise
[Figure: with Eps = 1 cm and MinPts = 5, points are labeled core, border, or outlier.]
44
45. DBSCAN: The Algorithm
Arbitrarily select a point p
Retrieve all points density-reachable from p w.r.t. Eps and
MinPts
If p is a core point, a cluster is formed
If p is a border point, no points are density-reachable
from p and DBSCAN visits the next point of the database
Continue the process until all of the points have been
processed
45
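A hedged usage sketch with scikit-learn's DBSCAN, whose eps and min_samples parameters correspond to Eps and MinPts above; the toy data and parameter values are made up for illustration, and the isolated point comes out labeled -1 (noise):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus one isolated point that should be reported as noise
X = np.array([
    [1.0, 1.0], [1.1, 0.9], [0.9, 1.2], [1.2, 1.1],
    [5.0, 5.0], [5.1, 5.2], [4.9, 4.8], [5.2, 5.1],
    [9.0, 0.5],
])

labels = DBSCAN(eps=0.6, min_samples=3).fit_predict(X)
print(labels)   # e.g. [0 0 0 0 1 1 1 1 -1]
```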
47. OPTICS: A Cluster-Ordering Method (1999)
OPTICS: Ordering Points To Identify the Clustering
Structure
Ankerst, Breunig, Kriegel, and Sander (SIGMOD’99)
Produces a special ordering of the database w.r.t. its
density-based clustering structure
This cluster ordering contains information equivalent to the density-
based clusterings corresponding to a broad range of
parameter settings
Good for both automatic and interactive cluster
analysis, including finding intrinsic clustering structure
Can be represented graphically or using visualization
techniques
47
48. OPTICS: Some Extension from DBSCAN
Index-based:
k = number of dimensions
N = 20
p = 75%
M = N(1-p) = 5
Complexity: O(NlogN)
Core Distance:
min eps s.t. point is core
Reachability distance of p from o:
max(core-distance(o), d(o, p))
[Figure: with MinPts = 5 and ε = 3 cm, r(p1, o) = 2.8 cm and r(p2, o) = 4 cm.]
51. DENCLUE: Using Statistical Density Functions
DENsity-based CLUstEring by Hinneburg & Keim (KDD’98)
Using statistical density functions:
Major features
Solid mathematical foundation
Good for data sets with large amounts of noise
Allows a compact mathematical description of arbitrarily shaped
clusters in high-dimensional data sets
Significantly faster than existing algorithms (e.g., DBSCAN)
But needs a large number of parameters
Influence of y on x: $f_{Gaussian}(x, y) = e^{-\frac{d(x, y)^2}{2\sigma^2}}$
Total influence on x: $f_{Gaussian}^{D}(x) = \sum_{i=1}^{N} e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$
Gradient of x in the direction of $x_i$: $\nabla f_{Gaussian}^{D}(x, x_i) = \sum_{i=1}^{N} (x_i - x)\, e^{-\frac{d(x, x_i)^2}{2\sigma^2}}$
52. DENCLUE: Technical Essence
Uses grid cells but only keeps information about grid cells that do
actually contain data points and manages these cells in a tree-based
access structure
Influence function: describes the impact of a data point within its
neighborhood
Overall density of the data space can be calculated as the sum of the
influence function of all data points
Clusters can be determined mathematically by identifying density
attractors
Density attractors are local maxima of the overall density function
Center defined clusters: assign to each density attractor the points
density attracted to it
Arbitrary shaped cluster: merge density attractors that are connected
through paths of high density (> threshold)
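The density function and its gradient can be evaluated directly, and hill-climbing along the gradient moves a point toward its density attractor. A small NumPy sketch, assuming a Gaussian influence function with a fixed sigma and a fixed step size (both illustrative):

```python
import numpy as np

def gaussian_density(x, data, sigma=1.0):
    """Overall Gaussian influence (density) at point x, summed over all data points."""
    d2 = ((data - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2)).sum()

def density_gradient(x, data, sigma=1.0):
    """Gradient of the density at x; hill-climbing along it approaches an attractor."""
    d2 = ((data - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return ((data - x) * w[:, None]).sum(axis=0)

data = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1], [5.0, 5.0], [5.1, 4.9]])
x = np.array([1.5, 1.5])
for _ in range(50):                 # simple fixed-step gradient ascent
    x = x + 0.1 * density_gradient(x, data)
print(x, gaussian_density(x, data))
```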
56. Grid-Based Clustering Method
Using multi-resolution grid data structure
Several interesting methods
STING (a STatistical INformation Grid approach) by
Wang, Yang and Muntz (1997)
WaveCluster by Sheikholeslami, Chatterjee, and
Zhang (VLDB’98)
A multi-resolution clustering approach using
wavelet method
CLIQUE: Agrawal, et al. (SIGMOD’98)
Both grid-based and subspace clustering
56
57. STING: A Statistical Information Grid Approach
Wang, Yang and Muntz (VLDB’97)
The spatial area is divided into rectangular cells
There are several levels of cells corresponding to different
levels of resolution
57
[Figure: a hierarchy of grid layers, from the 1st (top) layer down through the
(i-1)st layer to the i-th layer, with each cell at one level divided into finer cells
at the next.]
58. The STING Clustering Method
Each cell at a high level is partitioned into a number of
smaller cells in the next lower level
Statistical info of each cell is calculated and stored
beforehand and is used to answer queries
Parameters of higher level cells can be easily calculated
from parameters of lower level cell
count, mean, standard deviation (s), min, max
type of distribution—normal, uniform, etc.
Use a top-down approach to answer spatial data queries
Start from a pre-selected layer—typically with a small
number of cells
For each cell in the current level compute the confidence
interval
58
59. STING Algorithm and Its Analysis
Remove the irrelevant cells from further consideration
When finish examining the current layer, proceed to the
next lower level
Repeat this process until the bottom layer is reached
Advantages:
Query-independent, easy to parallelize, incremental
update
O(K), where K is the number of grid cells at the lowest
level
Disadvantages:
All the cluster boundaries are either horizontal or
vertical, and no diagonal boundary is detected
59
60. CLIQUE (Clustering In QUEst)
Agrawal, Gehrke, Gunopulos, Raghavan (SIGMOD’98)
Automatically identifying subspaces of a high dimensional data space
that allow better clustering than original space
CLIQUE can be considered as both density-based and grid-based
It partitions each dimension into the same number of equal-length
intervals
It partitions an m-dimensional data space into non-overlapping
rectangular units
A unit is dense if the fraction of total data points contained in the unit
exceeds the input model parameter
A cluster is a maximal set of connected dense units within a
subspace
61. CLIQUE: The Major Steps
Partition the data space and find the number of points that lie
inside each cell of the partition.
Identify the subspaces that contain clusters using the Apriori
principle
Identify clusters
Determine dense units in all subspaces of interest
Determine connected dense units in all subspaces of
interest
Generate minimal description for the clusters
Determine maximal regions that cover a cluster of
connected dense units for each cluster
Determination of minimal cover for each cluster
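The first step, finding dense 1-D units, can be sketched in a few lines: partition each dimension into the same number of equal-length intervals and keep the intervals whose point count exceeds the density threshold. Higher-dimensional dense units would then be generated Apriori-style from these. The parameter names and toy data below are illustrative, not CLIQUE's original notation:

```python
import numpy as np
from collections import Counter

def dense_units_1d(X, xi=10, tau=0.1):
    """Dense 1-D units: xi equal-length intervals per dimension, density threshold tau."""
    n, d = X.shape
    dense = {}
    for dim in range(d):
        lo, hi = X[:, dim].min(), X[:, dim].max()
        cells = np.minimum(((X[:, dim] - lo) / (hi - lo) * xi).astype(int), xi - 1)
        counts = Counter(cells)
        dense[dim] = {c for c, cnt in counts.items() if cnt / n > tau}
    return dense

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([1, 1], 0.3, size=(50, 2)),
               rng.normal([6, 6], 0.3, size=(50, 2))])
print(dense_units_1d(X))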
63. Strength and Weakness of CLIQUE
Strength
automatically finds subspaces of the highest
dimensionality such that high density clusters exist in
those subspaces
insensitive to the order of records in input and does not
presume some canonical data distribution
scales linearly with the size of input and has good
scalability as the number of dimensions in the data
increases
Weakness
The accuracy of the clustering result may be degraded
at the expense of simplicity of the method
65. Assessing Clustering Tendency
Assess if non-random structure exists in the data by measuring the
probability that the data is generated by a uniform data distribution
Test spatial randomness by a statistical test: the Hopkins Statistic
Given a dataset D regarded as a sample of a random variable o,
determine how far away o is from being uniformly distributed in the
data space
Sample n points, p1, …, pn, uniformly from D. For each pi, find its
nearest neighbor in D: xi = min{dist (pi, v)} where v in D
Sample n points, q1, …, qn, uniformly from D. For each qi, find its
nearest neighbor in D – {qi}: yi = min{dist (qi, v)} where v in D and v ≠
qi
Calculate the Hopkins Statistic:
$H = \frac{\sum_{i=1}^{n} y_i}{\sum_{i=1}^{n} x_i + \sum_{i=1}^{n} y_i}$
If D is uniformly distributed, ∑ xi and ∑ yi will be close to each other
and H is close to 0.5. If D is highly skewed (i.e., clustered), H is close to 0
65
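A sketch of the Hopkins statistic as described above, assuming Euclidean distance; the helper name hopkins and the default sample size are our own choices:

```python
import numpy as np

def hopkins(D, n=None, seed=0):
    """Hopkins statistic sketch: H near 0.5 suggests uniform data, near 0 clustered."""
    rng = np.random.default_rng(seed)
    D = np.asarray(D, dtype=float)
    n = n or min(50, len(D) // 2)

    # x_i: nearest-neighbor distance in D for n points drawn uniformly from the data space
    mins, maxs = D.min(axis=0), D.max(axis=0)
    P = rng.uniform(mins, maxs, size=(n, D.shape[1]))
    x = np.array([np.linalg.norm(D - p, axis=1).min() for p in P])

    # y_i: nearest-neighbor distance in D - {q_i} for n points q_i sampled from D itself
    idx = rng.choice(len(D), size=n, replace=False)
    y = np.array([np.sort(np.linalg.norm(D - D[i], axis=1))[1] for i in idx])

    return y.sum() / (x.sum() + y.sum())
```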
66. Determine the Number of Clusters
Empirical method
# of clusters k ≈ √(n/2) for a dataset of n points
Elbow method
Use the turning point in the curve of sum of within cluster variance
w.r.t the # of clusters
Cross validation method
Divide a given data set into m parts
Use m – 1 parts to obtain a clustering model
Use the remaining part to test the quality of the clustering
E.g., For each point in the test set, find the closest centroid, and
use the sum of squared distance between all points in the test
set and the closest centroids to measure how well the model fits
the test set
For any k > 0, repeat it m times, compare the overall quality measure
w.r.t. different k’s, and find # of clusters that fits the data the best
66
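The elbow method can be run with any k-means implementation by recording the within-cluster sum of squares for increasing k. A sketch using scikit-learn's KMeans (its inertia_ attribute is that sum); the synthetic blobs are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 1.0, size=(100, 2)) for c in ([0, 0], [5, 5], [0, 8])])

# Plot (or print) inertia against k and look for the "turning point" where it flattens
for k in range(1, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    print(k, round(km.inertia_, 1))
```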
67. Measuring Clustering Quality
Two methods: extrinsic vs. intrinsic
Extrinsic: supervised, i.e., the ground truth is available
Compare a clustering against the ground truth using
certain clustering quality measure
Ex. BCubed precision and recall metrics
Intrinsic: unsupervised, i.e., the ground truth is unavailable
Evaluate the goodness of a clustering by considering
how well the clusters are separated, and how compact
the clusters are
Ex. Silhouette coefficient
67
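As an example of an intrinsic measure, the silhouette coefficient can be computed with scikit-learn; higher values indicate better-separated, more compact clusters. The three synthetic blobs below are illustrative, and the score would typically peak at k = 3 for them:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.4, size=(60, 2)) for c in ([0, 0], [4, 4], [0, 5])])

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))   # typically highest at k = 3 here
```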
68. Measuring Clustering Quality: Extrinsic Methods
Clustering quality measure: Q(C, Cg), for a clustering C
given the ground truth Cg.
Q is good if it satisfies the following 4 essential criteria
Cluster homogeneity: the purer, the better
Cluster completeness: should assign objects belonging to
the same category in the ground truth to the same cluster
Rag bag: putting a heterogeneous object into a pure
cluster should be penalized more than putting it into a rag
bag (i.e., “miscellaneous” or “other” category)
Small cluster preservation: splitting a small category into
pieces is more harmful than splitting a large category into
pieces
68
70. Summary
Cluster analysis groups objects based on their similarity and has wide
applications
Measure of similarity can be computed for various types of data
Clustering algorithms can be categorized into partitioning methods,
hierarchical methods, density-based methods, grid-based methods,
and model-based methods
K-means and K-medoids algorithms are popular partitioning-based
clustering algorithms
Birch and Chameleon are interesting hierarchical clustering
algorithms, and there are also probabilistic hierarchical clustering
algorithms
DBSCAN, OPTICS, and DENCLUE are interesting density-based
algorithms
STING and CLIQUE are grid-based methods, where CLIQUE is also a
subspace clustering algorithm
Quality of clustering results can be evaluated in various ways
70
71. CS512-Spring 2011: An Introduction
Coverage
Cluster Analysis: Chapter 11
Outlier Detection: Chapter 12
Mining Sequence Data: BK2: Chapter 8
Mining Graphs Data: BK2: Chapter 9
Social and Information Network Analysis
BK2: Chapter 9
Partial coverage: Mark Newman: “Networks: An Introduction”, Oxford U., 2010
Scattered coverage: Easley and Kleinberg, “Networks, Crowds, and Markets:
Reasoning About a Highly Connected World”, Cambridge U., 2010
Recent research papers
Mining Data Streams: BK2: Chapter 8
Requirements
One research project
One class presentation (15 minutes)
Two homeworks (no programming assignment)
Two midterm exams (no final exam)
72. References (1)
R. Agrawal, J. Gehrke, D. Gunopulos, and P. Raghavan. Automatic subspace
clustering of high dimensional data for data mining applications. SIGMOD'98
M. R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973.
M. Ankerst, M. Breunig, H.-P. Kriegel, and J. Sander. Optics: Ordering points
to identify the clustering structure, SIGMOD’99.
Beil F., Ester M., Xu X.: "Frequent Term-Based Text Clustering", KDD'02
M. M. Breunig, H.-P. Kriegel, R. Ng, J. Sander. LOF: Identifying Density-Based
Local Outliers. SIGMOD 2000.
M. Ester, H.-P. Kriegel, J. Sander, and X. Xu. A density-based algorithm for
discovering clusters in large spatial databases. KDD'96.
M. Ester, H.-P. Kriegel, and X. Xu. Knowledge discovery in large spatial
databases: Focusing techniques for efficient class identification. SSD'95.
D. Fisher. Knowledge acquisition via incremental conceptual clustering.
Machine Learning, 2:139-172, 1987.
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An
approach based on dynamic systems. VLDB’98.
V. Ganti, J. Gehrke, R. Ramakrishan. CACTUS Clustering Categorical Data
Using Summaries. KDD'99.
72
73. References (2)
D. Gibson, J. Kleinberg, and P. Raghavan. Clustering categorical data: An
approach based on dynamic systems. In Proc. VLDB’98.
S. Guha, R. Rastogi, and K. Shim. Cure: An efficient clustering algorithm for
large databases. SIGMOD'98.
S. Guha, R. Rastogi, and K. Shim. ROCK: A robust clustering algorithm for
categorical attributes. In ICDE'99, pp. 512-521, Sydney, Australia, March
1999.
A. Hinneburg, D. A. Keim: An Efficient Approach to Clustering in Large
Multimedia Databases with Noise. KDD’98.
A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
G. Karypis, E.-H. Han, and V. Kumar. CHAMELEON: A Hierarchical Clustering
Algorithm Using Dynamic Modeling. COMPUTER, 32(8): 68-75, 1999.
L. Kaufman and P. J. Rousseeuw. Finding Groups in Data: an Introduction to
Cluster Analysis. John Wiley & Sons, 1990.
E. Knorr and R. Ng. Algorithms for mining distance-based outliers in large
datasets. VLDB’98.
74. References (3)
G. J. McLachlan and K. E. Basford. Mixture Models: Inference and Applications to
Clustering. John Wiley & Sons, 1988.
R. Ng and J. Han. Efficient and effective clustering method for spatial data mining.
VLDB'94.
L. Parsons, E. Haque and H. Liu, Subspace Clustering for High Dimensional Data: A
Review, SIGKDD Explorations, 6(1), June 2004
E. Schikuta. Grid clustering: An efficient hierarchical clustering method for very large
data sets. Proc. 1996 Int. Conf. on Pattern Recognition
G. Sheikholeslami, S. Chatterjee, and A. Zhang. WaveCluster: A multi-resolution
clustering approach for very large spatial databases. VLDB’98.
A. K. H. Tung, J. Han, L. V. S. Lakshmanan, and R. T. Ng. Constraint-Based Clustering
in Large Databases, ICDT'01.
A. K. H. Tung, J. Hou, and J. Han. Spatial Clustering in the Presence of Obstacles,
ICDE'01
H. Wang, W. Wang, J. Yang, and P.S. Yu. Clustering by pattern similarity in large data
sets, SIGMOD’02
W. Wang, J. Yang, and R. Muntz. STING: A Statistical Information Grid Approach to
Spatial Data Mining. VLDB'97
T. Zhang, R. Ramakrishnan, and M. Livny. BIRCH: An efficient data clustering method
for very large databases. SIGMOD'96
X. Yin, J. Han, and P. S. Yu, “LinkClus: Efficient Clustering via Heterogeneous Semantic
Links”, VLDB'06
76. A Typical K-Medoids Algorithm (PAM)
[Figure: PAM illustrated on a small 2-D data set with K = 2]
Arbitrarily choose k objects as the initial medoids
Assign each remaining object to the nearest medoid (total cost = 20)
Randomly select a non-medoid object, Orandom
Compute the total cost of swapping a medoid O with Orandom (total cost = 26)
Swap O and Orandom if the quality is improved
Loop until no change
77. PAM (Partitioning Around Medoids) (1987)
PAM (Kaufman and Rousseeuw, 1987), built into the S-Plus statistical package
Uses real objects to represent the clusters
Select k representative objects arbitrarily
For each pair of a non-selected object h and a selected object i, calculate the
total swapping cost TCih
If TCih < 0, i is replaced by h
Then assign each non-selected object to the most similar representative object
Repeat the previous steps until there is no change (a runnable sketch follows below)
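The swap-based search can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the book's reference implementation: the names pam and total_cost are my own, Euclidean distance is assumed, and every possible swap is tried exhaustively.

```python
import numpy as np

def total_cost(X, medoid_idx):
    """Sum of distances from each object to its nearest medoid."""
    d = np.linalg.norm(X[:, None, :] - X[medoid_idx][None, :, :], axis=2)
    return d.min(axis=1).sum()

def pam(X, k, seed=0):
    """Naive PAM: try every (medoid i, non-medoid h) swap, keep it if cost drops."""
    rng = np.random.default_rng(seed)
    n = len(X)
    medoids = list(rng.choice(n, size=k, replace=False))   # arbitrary initial medoids
    best = total_cost(X, medoids)
    improved = True
    while improved:                        # repeat until there is no change
        improved = False
        for i in range(k):
            for h in range(n):
                if h in medoids:
                    continue
                cand = medoids.copy()
                cand[i] = h                # candidate after swapping i and h
                c = total_cost(X, cand)    # swapping cost TCih = c - best
                if c < best:               # TCih < 0: replace i by h
                    medoids, best = cand, c
                    improved = True
    labels = np.argmin(
        np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2), axis=1)
    return medoids, labels, best
```

On a small 2-D array with k = 2 this reproduces the behavior of the previous slide: a swap is kept only when it lowers the total cost.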
78. PAM Clustering: Finding the Best Cluster Center
Case 1: p currently belongs to oj. If oj is replaced by orandom as a
representative object and p is closest to one of the other
representative objects oi, then p is reassigned to oi
79. What Is the Problem with PAM?
PAM is more robust than k-means in the presence of
noise and outliers, because a medoid is less influenced by
outliers or other extreme values than a mean
PAM works efficiently for small data sets but does not
scale well to large data sets:
O(k(n-k)²) per iteration, where n is the number of data objects and k is the number of clusters
Sampling-based method: CLARA (Clustering LARge Applications)
80. CLARA (Clustering Large Applications) (1990)
CLARA (Kaufman and Rousseeuw, 1990)
Built into statistical analysis packages, such as S-Plus
It draws multiple samples of the data set, applies PAM
on each sample, and gives the best clustering as the
output
Strength: deals with larger data sets than PAM
Weakness:
Efficiency depends on the sample size
A good clustering based on samples will not necessarily
represent a good clustering of the whole data set if the
sample is biased
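CLARA's sampling idea is easy to sketch on top of a PAM routine. The sketch below assumes the pam() and total_cost() functions from the PAM sketch above; the sample size and number of samples are illustrative defaults, not values prescribed by the slide.

```python
import numpy as np

def clara(X, k, n_samples=5, sample_size=40, seed=0):
    """CLARA sketch: run PAM on several random samples and keep the medoids
    whose total cost over the *whole* data set is smallest."""
    rng = np.random.default_rng(seed)
    best_medoids, best_cost = None, np.inf
    for _ in range(n_samples):
        idx = rng.choice(len(X), size=min(sample_size, len(X)), replace=False)
        sample_medoids, _, _ = pam(X[idx], k)   # pam() from the PAM sketch above
        medoids = idx[sample_medoids]           # map sample indices back to X
        cost = total_cost(X, medoids)           # judge quality on the full data set
        if cost < best_cost:
            best_medoids, best_cost = medoids, cost
    return best_medoids, best_cost
```

Because each PAM run only sees sample_size objects, the O(k(n-k)²) cost per iteration is paid on the sample, which is what makes CLARA applicable to larger data sets; a biased sample, however, can still yield poor medoids for the whole data set.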
81. CLARANS (“Randomized” CLARA) (1994)
CLARANS (A Clustering Algorithm based on RANdomized Search) (Ng and Han, 1994)
Draws a sample of neighbors dynamically
The clustering process can be viewed as searching a graph in which every
node is a potential solution, that is, a set of k medoids
Once a local optimum is found, CLARANS starts from a new, randomly
selected node to search for another local optimum
Advantages: More efficient and scalable than both PAM
and CLARA
Further improvement: Focusing techniques and spatial
access structures (Ester et al.’95)
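A minimal sketch of this randomized graph search, reusing total_cost() from the PAM sketch above; the parameter names numlocal and maxneighbor follow Ng and Han's description, but the default values here are arbitrary.

```python
import numpy as np

def clarans(X, k, numlocal=2, maxneighbor=50, seed=0):
    """CLARANS sketch: each node of the search graph is a set of k medoids;
    neighboring nodes differ in exactly one medoid and are sampled at random."""
    rng = np.random.default_rng(seed)
    n = len(X)
    best_node, best_cost = None, np.inf
    for _ in range(numlocal):                    # restart from a random node
        node = list(rng.choice(n, size=k, replace=False))
        cost = total_cost(X, node)               # total_cost() from the PAM sketch
        tried = 0
        while tried < maxneighbor:               # examine random neighbors only
            i, h = rng.integers(k), rng.integers(n)
            if h in node:
                continue
            neighbor = node.copy()
            neighbor[i] = h
            c = total_cost(X, neighbor)
            if c < cost:                         # move to the better neighbor
                node, cost, tried = neighbor, c, 0
            else:
                tried += 1                       # maxneighbor failures = local optimum
        if cost < best_cost:                     # keep the best local optimum found
            best_node, best_cost = node, cost
    return best_node, best_cost
```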
82. ROCK: Clustering Categorical Data
ROCK: RObust Clustering using linKs
S. Guha, R. Rastogi & K. Shim, ICDE'99
Major ideas
Use links to measure similarity/proximity
Not distance-based
Algorithm: sampling-based clustering
Draw a random sample
Cluster with links
Label the data on disk
Experiments
Congressional voting, mushroom data
83. Similarity Measure in ROCK
Traditional measures for categorical data may not work well, e.g., the
Jaccard coefficient
Example: two groups (clusters) of transactions
C1. <a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e},
{a, d, e}, {b, c, d}, {b, c, e}, {b, d, e}, {c, d, e}
C2. <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
The Jaccard coefficient may lead to a wrong clustering result
Within C1: from 0.2 ({a, b, c}, {b, d, e}) to 0.5 ({a, b, c}, {a, b, d})
Across C1 & C2: can be as high as 0.5 ({a, b, c}, {a, b, f})
Jaccard coefficient-based similarity function:
sim(T1, T2) = |T1 ∩ T2| / |T1 ∪ T2|
Ex. Let T1 = {a, b, c}, T2 = {c, d, e}
sim(T1, T2) = |{c}| / |{a, b, c, d, e}| = 1/5 = 0.2
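These numbers are easy to verify; a small check (the function name is my own):

```python
def jaccard(t1, t2):
    """Jaccard coefficient between two transactions (sets of items)."""
    t1, t2 = set(t1), set(t2)
    return len(t1 & t2) / len(t1 | t2)

print(jaccard("abc", "cde"))   # 0.2  (= 1/5)
print(jaccard("abc", "abd"))   # 0.5  (within C1)
print(jaccard("abc", "abf"))   # 0.5  (across C1 and C2: same value, hence the problem)
```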
84. Link Measure in ROCK
Clusters
C1:<a, b, c, d, e>: {a, b, c}, {a, b, d}, {a, b, e}, {a, c, d}, {a, c, e}, {a, d, e}, {b,
c, d}, {b, c, e}, {b, d, e}, {c, d, e}
C2: <a, b, f, g>: {a, b, f}, {a, b, g}, {a, f, g}, {b, f, g}
Neighbors
Two transactions are neighbors if sim(T1,T2) > threshold
Let T1 = {a, b, c}, T2 = {c, d, e}, T3 = {a, b, f}
T1 connected to: {a,b,d}, {a,b,e}, {a,c,d}, {a,c,e}, {b,c,d}, {b,c,e}, {a,b,f},
{a,b,g}
T2 connected to: {a,c,d}, {a,c,e}, {a,d,e}, {b,c,e}, {b,d,e}, {b,c,d}
T3 connected to: {a,b,c}, {a,b,d}, {a,b,e}, {a,b,g}, {a,f,g}, {b,f,g}
Link Similarity
Link similarity between two transactions is the # of common neighbors
link(T1, T2) = 4, since they have 4 common neighbors
{a, c, d}, {a, c, e}, {b, c, d}, {b, c, e}
link(T1, T3) = 3, since they have 3 common neighbors
{a, b, d}, {a, b, e}, {a, b, g}
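A small script reproducing the neighbor sets and link counts above. The slide does not state the threshold value, so 0.4 is assumed here; any threshold that admits similarity 0.5 but rejects 0.2 gives the same neighbor sets.

```python
C1 = [{"a","b","c"}, {"a","b","d"}, {"a","b","e"}, {"a","c","d"}, {"a","c","e"},
      {"a","d","e"}, {"b","c","d"}, {"b","c","e"}, {"b","d","e"}, {"c","d","e"}]
C2 = [{"a","b","f"}, {"a","b","g"}, {"a","f","g"}, {"b","f","g"}]
data = C1 + C2

def jaccard(t1, t2):
    return len(t1 & t2) / len(t1 | t2)

THETA = 0.4   # assumed threshold; the slide only says sim(T1, T2) > threshold

def neighbors(t):
    """All other transactions whose Jaccard similarity with t exceeds the threshold."""
    return [u for u in data if u != t and jaccard(t, u) > THETA]

def link(t1, t2):
    """Link = number of common neighbors of t1 and t2."""
    n1 = neighbors(t1)
    return sum(1 for u in neighbors(t2) if u in n1)

T1, T2, T3 = {"a","b","c"}, {"c","d","e"}, {"a","b","f"}
print(link(T1, T2))   # 4: {a,c,d}, {a,c,e}, {b,c,d}, {b,c,e}
print(link(T1, T3))   # 3: {a,b,d}, {a,b,e}, {a,b,g}
```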
85. Aggregation-Based Similarity Computation
[Figure: objects a and b (in another SimTree) are linked to leaf nodes n10, n11, n12 and n13, n14 respectively; the leaf-to-parent similarities in the SimTree are s(n10, n4) = 0.9, s(n11, n4) = 1.0, s(n12, n4) = 0.8, s(n5, n13) = 0.9, s(n5, n14) = 1.0, and s(n4, n5) = 0.2]
For each node nk ∈ {n10, n11, n12} and nl ∈ {n13, n14}, their path-based similarity is
simp(nk, nl) = s(nk, n4) · s(n4, n5) · s(n5, nl)
Averaging over all such pairs,
sim(a, b) = (Σ_{k=10..12} s(nk, n4) / 3) · s(n4, n5) · (Σ_{l=13..14} s(n5, nl) / 2) = 0.9 × 0.2 × 0.95 = 0.171
After aggregation, the quadratic-time computation over all 3 × 2 leaf pairs is reduced to
linear time: it takes only O(3 + 2) time
86. Computing Similarity with Aggregation
To compute sim(na, nb):
Find all pairs of sibling nodes ni and nj such that na is linked with ni and nb is linked with nj
Calculate the similarity (and weight) between na and nb w.r.t. ni and nj
Calculate the weighted-average similarity between na and nb over all such pairs
sim(na, nb) = avg_sim(na, n4) × s(n4, n5) × avg_sim(nb, n5) = 0.9 × 0.2 × 0.95 = 0.171
sim(na, nb) can thus be computed from aggregated similarities (average similarity and total weight)
[Figure: aggregated similarities stored at the non-leaf nodes, a: (0.9, 3) at n4 and b: (0.95, 2) at n5, with s(n4, n5) = 0.2]
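The saving is easy to see numerically. A small check, with edge weights taken from the example figure and variable names of my own choosing:

```python
# Leaf-to-parent similarities from the example figure
s_a = [0.9, 1.0, 0.8]        # s(n10, n4), s(n11, n4), s(n12, n4)
s_b = [0.9, 1.0]             # s(n5, n13), s(n5, n14)
s_45 = 0.2                   # s(n4, n5)

# Pairwise (quadratic) computation: average over all 3 x 2 leaf pairs
pairwise = sum(x * s_45 * y for x in s_a for y in s_b) / (len(s_a) * len(s_b))

# Aggregated (linear) computation: product of the two averages and s(n4, n5)
aggregated = (sum(s_a) / len(s_a)) * s_45 * (sum(s_b) / len(s_b))

print(pairwise, aggregated)   # both 0.171 (up to floating-point rounding)
```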
88. Link-Based Clustering: Calculate Similarities
Based On Links
Jeh & Widom, KDD’2002: SimRank
Two objects are similar if they are
linked with the same or similar
objects
The similarity between two
objects x and y is defined as
the average similarity between
objects linked with x and those
linked with y:
sim(a, b) = C / (|I(a)| · |I(b)|) × Σ_{i=1..|I(a)|} Σ_{j=1..|I(b)|} sim(I_i(a), I_j(b))
where I(a) denotes the set of objects linked with a and C is a constant decay factor
Issue: expensive to compute.
For a dataset of N objects
and M links, it takes O(N²)
space and O(M²) time to
compute all similarities.
[Figure: a bipartite link graph. Authors (Tom, Mike, Cathy, John, Mary) are linked to
the proceedings they published in (sigmod03, sigmod04, sigmod05, vldb03, vldb04,
vldb05, aaai04, aaai05), which are in turn linked to conferences (sigmod, vldb, aaai)]
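For concreteness, here is a naive SimRank iteration over an undirected link graph. The graph below is a tiny made-up fragment in the spirit of the author/proceedings figure, and C = 0.8 is a common choice of decay factor, not a value fixed by the slide.

```python
from itertools import product

# Tiny illustrative link graph (made up): authors linked to proceedings
links = {
    "Tom":   {"sigmod03", "sigmod04"},
    "Mike":  {"sigmod04", "vldb03"},
    "Cathy": {"vldb03", "vldb04"},
    "sigmod03": {"Tom"}, "sigmod04": {"Tom", "Mike"},
    "vldb03": {"Mike", "Cathy"}, "vldb04": {"Cathy"},
}
C = 0.8                                   # decay factor
nodes = list(links)
sim = {(x, y): 1.0 if x == y else 0.0 for x, y in product(nodes, nodes)}

for _ in range(5):                        # a few fixed-point iterations
    new = {}
    for x, y in product(nodes, nodes):
        if x == y:
            new[(x, y)] = 1.0
        elif links[x] and links[y]:
            total = sum(sim[(i, j)] for i in links[x] for j in links[y])
            new[(x, y)] = C * total / (len(links[x]) * len(links[y]))
        else:
            new[(x, y)] = 0.0
    sim = new

print(round(sim[("Tom", "Mike")], 3))     # higher: they share sigmod04
print(round(sim[("Tom", "Cathy")], 3))    # lower: linked only through Mike's venues
```

Even on this toy graph the quadratic blow-up is visible: every pair of nodes and every pair of their neighbors is touched in each iteration, which is exactly the O(N²) space and O(M²) time issue stated above.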
89. Observation 1: Hierarchical Structures
Hierarchical structures often exist naturally among objects
(e.g., taxonomy of animals)
[Figure: a hierarchical structure of products in Walmart: All is divided into electronics, grocery, and apparel; electronics is further divided into TV, DVD, and camera]
[Figure: relationships between articles and words (Chakrabarti, Papadimitriou, Modha, Faloutsos, 2004)]
90. Observation 2: Distribution of Similarity
Power law distribution exists in similarities
56% of similarity entries are in [0.005, 0.015]
1.4% of similarity entries are larger than 0.1
Can we design a data structure that stores the significant
similarities and compresses insignificant ones?
[Figure: distribution of SimRank similarities among DBLP authors; x-axis: similarity value, y-axis: portion of entries]
91. A Novel Data Structure: SimTree
Each leaf node
represents an object
Each non-leaf node
represents a group
of similar lower-level
nodes
Similarities between
siblings are stored
[Figure: an example SimTree over products: the leaf objects Canon A40 digital camera and Sony V3 digital camera are grouped under Digital Cameras, which together with TVs is grouped under Consumer electronics; Apparels is a separate high-level node]
92. Similarity Defined by SimTree
Path-based node similarity
simp(n7, n8) = s(n7, n4) × s(n4, n5) × s(n5, n8)
Similarity between two nodes is the average similarity
between the objects linked with them in other SimTrees
Adjustment ratio for a node x = (average similarity between x and all other nodes) /
(average similarity between x's parent and all other nodes)
[Figure: a three-level SimTree with top-level nodes n1, n2, n3, middle-level nodes n4, n5, n6, and leaf nodes n7, n8, n9; sibling similarities such as s(n1, n2) = 0.9 and s(n4, n5) = 0.2 are stored on the tree, and the adjustment ratio is illustrated for node n7]
93. LinkClus: Efficient Clustering via
Heterogeneous Semantic Links
Method
Initialize a SimTree for objects of each type
Repeat until stable
For each SimTree, update the similarities between its
nodes using similarities in other SimTrees
Similarity between two nodes x and y is the average
similarity between objects linked with them
Adjust the structure of each SimTree
Assign each node to the parent node that it is most
similar to
For details: X. Yin, J. Han, and P. S. Yu, “LinkClus: Efficient
Clustering via Heterogeneous Semantic Links”, VLDB'06
94. Initialization of SimTrees
Initializing a SimTree
Repeatedly find groups of tightly related nodes, which
are merged into a higher-level node
Tightness of a group of nodes
For a group of nodes {n1, …, nk}, its tightness is
defined as the number of leaf nodes in other SimTrees
that are connected to all of {n1, …, nk}
[Figure: candidate nodes n1 and n2, and leaf nodes 1-5 in another SimTree; three of the leaf nodes are connected to both n1 and n2, so the tightness of {n1, n2} is 3]
95. Finding Tight Groups by Freq. Pattern Mining
Finding tight groups: frequent pattern mining
Procedure of initializing a tree
Start from leaf nodes (level-0)
At each level l, find non-overlapping groups of similar
nodes with frequent pattern mining
[Figure: the problem is reduced to frequent pattern mining. Each of the leaf nodes 1-9 in the other SimTree generates a transaction listing the candidate nodes among {n1, n2, n3, n4} it is connected to: {n1}, {n1, n2}, {n2}, {n1, n2}, {n1, n2}, {n2, n3, n4}, {n4}, {n3, n4}, {n3, n4}. The tightness of a group of nodes is the support of the corresponding frequent pattern; here the groups g1 = {n1, n2} and g2 = {n3, n4} are found]
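As a sketch, the tightness of a candidate group is simply the support of the corresponding itemset. The transactions below are copied from the example above; a real implementation would use a frequent pattern mining algorithm rather than the brute-force enumeration shown here, and the function names are mine.

```python
from itertools import combinations

# One transaction per leaf node in the other SimTree: the subset of candidate
# nodes {n1..n4} that the leaf is connected to (taken from the example figure)
transactions = [
    {"n1"}, {"n1", "n2"}, {"n2"}, {"n1", "n2"}, {"n1", "n2"},
    {"n2", "n3", "n4"}, {"n4"}, {"n3", "n4"}, {"n3", "n4"},
]

def tightness(group):
    """Support of the group = number of transactions containing every node in it."""
    group = set(group)
    return sum(1 for t in transactions if group <= t)

# Support of every 2-node group: {n1, n2} and {n3, n4} stand out with support 3
supports = {g: tightness(g) for g in combinations(["n1", "n2", "n3", "n4"], 2)}
print(sorted(supports.items(), key=lambda kv: -kv[1]))
```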
96. Adjusting SimTree Structures
After similarity changes, the tree structure also needs to be
changed
If a node is more similar to its parent’s sibling, then move
it to be a child of that sibling
Try to move each node to its parent’s sibling that it is most
similar to, under the constraint that each parent node can
have at most c children
[Figure: node n7's similarity to its parent n4 is 0.8, but its similarity to n4's sibling n5 is 0.9, so n7 is moved to become a child of n5]
97. Complexity
                           Time            Space
Updating similarities      O(M (log N)²)   O(M + N)
Adjusting tree structures  O(N)            O(N)
LinkClus                   O(M (log N)²)   O(M + N)
SimRank                    O(M²)           O(N²)
For two types of objects, N objects of each type and M linkages between them.
98. Experiment: Email Dataset
F. Nielsen. Email dataset.
www.imm.dtu.dk/~rem/data/Email-1431.zip
370 emails on conferences, 272 on jobs,
and 789 spam emails
Accuracy: measured by manually labeled
data
Accuracy of clustering: % of pairs of objects
in the same cluster that share a common label
Approach     Accuracy   Time (s)
LinkClus     0.8026     1579.6
SimRank      0.7965     39160
ReCom        0.5711     74.6
F-SimRank    0.3688     479.7
CLARANS      0.4768     8.55
Approaches compared:
SimRank (Jeh & Widom, KDD 2002): Computing pair-wise similarities
SimRank with FingerPrints (F-SimRank): Fogaras & Rácz, WWW 2005
pre-computes a large sample of random paths from each object and uses
samples of two objects to estimate SimRank similarity
ReCom (Wang et al. SIGIR 2003)
Iteratively clustering objects using cluster labels of linked objects
99. WaveCluster: Clustering by Wavelet Analysis (1998)
Sheikholeslami, Chatterjee, and Zhang (VLDB’98)
A multi-resolution clustering approach which applies wavelet transform to
the feature space; both grid-based and density-based
Wavelet transform: a signal processing technique that decomposes a signal
into different frequency sub-bands
Data are transformed to preserve relative distance between objects at
different levels of resolution
Allows natural clusters to become more distinguishable
100. The WaveCluster Algorithm
How to apply wavelet transform to find clusters
Summarizes the data by imposing a multidimensional grid
structure onto data space
These multidimensional spatial data objects are represented in an
n-dimensional feature space
Apply wavelet transform on the feature space to find the dense
regions in the feature space
Apply wavelet transform multiple times, which results in clusters at
different scales, from fine to coarse
Major features:
Complexity O(N)
Detect arbitrary shaped clusters at different scales
Not sensitive to noise, not sensitive to input order
Only applicable to low dimensional data
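A rough sketch of the first two steps (grid quantization followed by one level of 2-D wavelet transform). It assumes 2-D data and the PyWavelets package (pywt); it only shows how the smoothed approximation sub-band makes dense regions stand out, not the connected-component labeling that WaveCluster performs afterwards, and the threshold used at the end is an arbitrary illustration.

```python
import numpy as np
import pywt   # PyWavelets, assumed installed

def wavecluster_grid(X, bins=64, wavelet="haar"):
    """Quantize 2-D points onto a grid, then apply one level of 2-D wavelet transform.
    The approximation sub-band cA is a smoothed, lower-resolution density map in
    which dense regions (candidate clusters) stand out."""
    counts, _, _ = np.histogram2d(X[:, 0], X[:, 1], bins=bins)   # grid quantization
    cA, (cH, cV, cD) = pywt.dwt2(counts, wavelet)                # wavelet transform
    return cA

# Two Gaussian blobs plus uniform background noise
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([2, 2], 0.3, size=(300, 2)),
               rng.normal([6, 5], 0.3, size=(300, 2)),
               rng.uniform(0, 8, size=(100, 2))])
cA = wavecluster_grid(X)
print(cA.shape)                                        # half the grid resolution per dimension
print((cA > cA.mean() + 3 * cA.std()).sum(), "dense cells")   # rough count of dense cells
```

Applying dwt2 repeatedly to the approximation band yields the coarser scales mentioned above, which is how clusters appear at multiple resolutions.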
101. Quantization & Transformation
Quantize data into m-D grid structure,
then wavelet transform
a) scale 1: high resolution
b) scale 2: medium resolution
c) scale 3: low resolution
Editor's Notes
#95: We use this simple definition of tightness for efficiency reasons.