
Data stream clustering

In computer science, data stream clustering is defined as the clustering of data that arrive continuously, such as telephone records, multimedia data, financial transactions, etc. Data stream clustering is usually studied as a streaming algorithm, and the objective is, given a sequence of points, to construct a good clustering of the stream using a small amount of memory and time.

History
Data stream clustering has recently attracted attention for emerging applications that involve large amounts of streaming data. For clustering, k-means is a widely used heuristic, but alternative algorithms have also been developed, such as k-medoids, CURE and the popular BIRCH. For data streams, one of the first results appeared in 1980[1] but the model was formalized in 1998.[2]

Definition
The problem of data stream clustering is defined as:

Input: a sequence of n points in a metric space and an integer k.


Output: k centers in the set of the n points so as to minimize the sum of distances from data points to their
closest cluster centers.

This is the streaming version of the k-median problem.
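
To make the objective concrete: the cost being minimized is simply the sum of distances from each point to its nearest chosen center. A minimal Python sketch, with made-up toy points purely for illustration:

    import math

    def k_median_cost(points, centers):
        """Sum of distances from each point to its closest center,
        i.e. the quantity the streaming k-median problem minimizes."""
        return sum(min(math.dist(p, c) for c in centers) for p in points)

    # Toy example: two well-separated groups and k = 2 centers chosen from the points.
    points = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
    print(k_median_cost(points, centers=[(0.0, 0.0), (5.0, 5.0)]))  # low cost: each group has a nearby center
    print(k_median_cost(points, centers=[(0.0, 0.0), (0.1, 0.2)]))  # high cost: one group has no nearby center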

Algorithms

STREAM

STREAM is an algorithm for clustering data streams described by Guha, Mishra, Motwani and O'Callaghan[3] that achieves a constant-factor approximation for the k-Median problem in a single pass using small space.

Theorem — STREAM can solve the k-Median problem on a data stream in a single pass, with time O(n^{1+ε}) and space Θ(n^{ε}) up to a factor 2^{O(1/ε)}, where n is the number of points and ε < 1/2.

To understand STREAM, the first step is to show that clustering can take place in small space (not caring about the number of passes). Small-Space is a divide-and-conquer algorithm that divides the data, S, into ℓ pieces, clusters each one of them (using k-means) and then clusters the centers obtained.

Algorithm Small-Space(S)

1. Divide S into ℓ disjoint pieces X1, ..., Xℓ.
2. For each i, find O(k) centers in Xi. Assign each point in Xi to its closest center.
3. Let X' be the O(ℓk) centers obtained in (2), where each center c is weighted by the number of points assigned to it.
4. Cluster X' to find k centers.

If in Step 2 we run a bicriteria (a, b)-approximation algorithm which outputs at most ak medians with cost at most b times the optimum k-Median solution, and in Step 4 we run a c-approximation algorithm, then the approximation factor of the Small-Space() algorithm is 2c(1 + 2b) + 2b. We can also generalize Small-Space so that it recursively calls itself i times on a successively smaller set of weighted centers and achieves a constant factor approximation to the k-median problem.
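
A compact, non-streaming Python sketch of the Small-Space structure may be helpful. It is only an illustration under simplifying assumptions: scikit-learn's KMeans stands in for the paper's bicriteria and c-approximation k-median routines, exactly k (rather than O(k)) centers are found per piece, and the function and variable names are invented for this example.

    import numpy as np
    from sklearn.cluster import KMeans  # stand-in for the paper's k-median approximation routines

    def small_space(S, k, num_pieces=4):
        """Illustrative Small-Space(S): partition the data, cluster each piece,
        then cluster the weighted intermediate centers."""
        pieces = np.array_split(S, num_pieces)                # Step 1: divide S into disjoint pieces
        centers, weights = [], []
        for X_i in pieces:                                    # Step 2: cluster each piece separately
            km = KMeans(n_clusters=k, n_init=10).fit(X_i)
            for j, c in enumerate(km.cluster_centers_):
                centers.append(c)                             # Step 3: keep each intermediate center,
                weights.append(int(np.sum(km.labels_ == j)))  # weighted by the points assigned to it
        X_prime = np.array(centers)
        final = KMeans(n_clusters=k, n_init=10).fit(          # Step 4: cluster the weighted centers
            X_prime, sample_weight=np.array(weights))
        return final.cluster_centers_

    # Toy usage on synthetic 2-D data (illustration only).
    rng = np.random.default_rng(0)
    S = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in [(0, 0), (4, 4), (0, 4)]])
    print(small_space(S, k=3))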

The problem with Small-Space is that the number of subsets ℓ that we partition S into is limited, since it has to store the intermediate medians X' in memory. So, if M is the size of memory, we need to partition S into ℓ subsets such that each subset fits in memory (n/ℓ ≤ M) and so that the ℓk weighted centers also fit in memory (ℓk ≤ M). But such an ℓ may not always exist.

The STREAM algorithm solves the problem of storing intermediate medians and achieves better running
time and space requirements. The algorithm works as follows:[3]

1. Input the first m points; using the randomized algorithm presented in [3], reduce these to O(k)
(say 2k) points.
2. Repeat the above till we have seen m²/(2k) of the original data points. We now have m
intermediate medians.
3. Using a local search algorithm, cluster these m first-level medians into 2k second-level
medians and proceed.
4. In general, maintain at most m level-i medians, and, on seeing m, generate 2k level-(i + 1)
medians, with the weight of a new median as the sum of the weights of the intermediate
medians assigned to it.
5. When we have seen all the original data points, we cluster all the intermediate medians into
k final medians, using the primal-dual algorithm.[4]
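
The multi-level reduction described above can be sketched in Python as follows. This is a structural illustration only, under stated assumptions: scikit-learn's KMeans with sample weights stands in for the randomized reduction, the local search, and the final primal-dual clustering of the referenced papers; m is assumed to be larger than 2k; and all names are invented for this example.

    import numpy as np
    from sklearn.cluster import KMeans  # stand-in for the reduction / local-search / primal-dual steps

    def reduce_to(points, weights, num_centers):
        """Cluster weighted points down to num_centers weighted medians (stand-in reduction)."""
        km = KMeans(n_clusters=num_centers, n_init=10).fit(points, sample_weight=weights)
        new_weights = np.array([weights[km.labels_ == j].sum() for j in range(num_centers)])
        return km.cluster_centers_, new_weights  # a new median's weight = sum of assigned weights

    def stream_k_median(stream, k, m):
        """Sketch of STREAM: buffer m points, reduce them to 2k weighted medians,
        and repeat the reduction at each level whenever m medians accumulate there."""
        levels = {}      # level i -> (level-i medians, their weights) held in memory
        buffered = []    # raw points not yet reduced (assumes m > 2k)

        def push(level, pts, wts):
            P, W = levels.get(level, (np.empty((0, pts.shape[1])), np.empty(0)))
            P, W = np.vstack([P, pts]), np.concatenate([W, wts])
            if len(P) >= m:  # m level-i medians seen: generate 2k level-(i+1) medians
                levels[level] = (np.empty((0, pts.shape[1])), np.empty(0))
                push(level + 1, *reduce_to(P, W, 2 * k))
            else:
                levels[level] = (P, W)

        for x in stream:
            buffered.append(x)
            if len(buffered) == m:                 # first m points -> 2k weighted level-0 medians
                block, buffered = np.array(buffered), []
                push(0, *reduce_to(block, np.ones(m), 2 * k))

        # final step: cluster all leftover points and intermediate medians into k medians
        parts_P = [np.array(buffered)] if buffered else []
        parts_W = [np.ones(len(buffered))] if buffered else []
        for P, W in levels.values():
            if len(P):
                parts_P.append(P)
                parts_W.append(W)
        P_all, W_all = np.vstack(parts_P), np.concatenate(parts_W)
        return KMeans(n_clusters=k, n_init=10).fit(P_all, sample_weight=W_all).cluster_centers_

    # Toy usage: stream 12,000 synthetic 2-D points with m = 200 and k = 3.
    rng = np.random.default_rng(1)
    data = np.vstack([rng.normal(c, 0.5, size=(4000, 2)) for c in [(0, 0), (6, 0), (3, 5)]])
    print(stream_k_median(iter(data), k=3, m=200))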

Other algorithms

Other well-known algorithms used for data stream clustering are:

BIRCH:[5] builds a hierarchical data structure to incrementally cluster the incoming points
using the available memory and minimizing the amount of I/O required. The complexity of
the algorithm is O(N), since one pass suffices to get a good clustering (though results can
be improved by allowing several passes).
COBWEB:[6][7] is an incremental clustering technique that keeps a hierarchical clustering
model in the form of a classification tree. For each new point, COBWEB descends the tree,
updates the nodes along the way, and looks for the best node in which to place the point
(using a category utility function).
C2ICM:[8] builds a flat partitioning clustering structure by selecting some objects as cluster
seeds/initiators; a non-seed object is assigned to the seed that provides the highest coverage.
The addition of new objects can introduce new seeds and falsify some existing seeds; during
incremental clustering, new objects and the members of the falsified clusters are assigned to
one of the existing new or old seeds.
CluStream:[9] uses micro-clusters that are temporal extensions of the BIRCH[5] cluster feature
vector, so that it can decide whether a micro-cluster should be newly created, merged or forgotten
based on the analysis of the squared and linear sums of the current micro-clusters' data points
and timestamps. At any point in time, macro-clusters can then be generated by clustering these
micro-clusters with an offline clustering algorithm such as k-means, producing a final clustering
result (a sketch of such a micro-cluster's statistics follows this list).
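
To make the CluStream entry concrete, the sketch below shows the kind of additive statistics such a micro-cluster can keep: the BIRCH cluster-feature triple (count, linear sum, squared sum of the points) extended with linear and squared sums of timestamps. The class and method names are invented for illustration and do not correspond to any particular implementation's API.

    import numpy as np

    class MicroCluster:
        """Illustrative CluStream-style micro-cluster: a BIRCH cluster-feature
        vector extended with timestamp statistics."""

        def __init__(self, dim):
            self.n = 0                   # number of absorbed points
            self.ls = np.zeros(dim)      # linear sum of the points
            self.ss = np.zeros(dim)      # squared sum of the points
            self.lst = 0.0               # linear sum of the timestamps
            self.sst = 0.0               # squared sum of the timestamps

        def absorb(self, x, t):
            """Add one point x arriving at time t; every statistic is a simple sum."""
            self.n += 1
            self.ls += x
            self.ss += x * x
            self.lst += t
            self.sst += t * t

        def merge(self, other):
            """Merge another micro-cluster into this one by adding its statistics."""
            self.n += other.n
            self.ls += other.ls
            self.ss += other.ss
            self.lst += other.lst
            self.sst += other.sst

        def centroid(self):
            return self.ls / self.n

        def radius(self):
            """Root-mean-square deviation of the absorbed points from the centroid."""
            var = self.ss / self.n - (self.ls / self.n) ** 2
            return float(np.sqrt(np.maximum(var, 0.0).mean()))

Because every field is additive, deciding whether to absorb a point, merge two micro-clusters, or discard a stale one only requires these sums, and the offline macro-clustering step can run k-means over the stored centroids, weighting each by n.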

References
1. Munro, J.; Paterson, M. (1980). "Selection and Sorting with Limited Storage". Theoretical Computer Science. 12 (3): 315–323. doi:10.1016/0304-3975(80)90061-4.
2. Henzinger, M.; Raghavan, P.; Rajagopalan, S. (August 1998). "Computing on Data Streams". Digital Equipment Corporation. TR-1998-011. CiteSeerX 10.1.1.19.9554.
3. Guha, S.; Mishra, N.; Motwani, R.; O'Callaghan, L. (2000). "Clustering Data Streams". Proceedings of the Annual Symposium on Foundations of Computer Science: 359–366. CiteSeerX 10.1.1.32.1927. doi:10.1109/SFCS.2000.892124. ISBN 0-7695-0850-2. S2CID 2767180.
4. Jain, K.; Vazirani, V. (1999). Primal-dual approximation algorithms for metric facility location and k-median problems. Proc. FOCS '99. pp. 2–. ISBN 9780769504094.
5. Zhang, T.; Ramakrishnan, R.; Livny, M. (1996). "BIRCH: An Efficient Data Clustering Method for Very Large Databases". Proceedings of the ACM SIGMOD Conference on Management of Data. 25 (2): 103–114. doi:10.1145/235968.233324.
6. Fisher, D. H. (1987). "Knowledge Acquisition Via Incremental Conceptual Clustering". Machine Learning. 2 (2): 139–172. doi:10.1023/A:1022852608280.
7. Fisher, D. H. (1996). "Iterative Optimization and Simplification of Hierarchical Clusterings". Journal of AI Research. 4. arXiv:cs/9604103. CiteSeerX 10.1.1.6.9914.
8. Can, F. (1993). "Incremental Clustering for Dynamic Information Processing". ACM Transactions on Information Systems. 11 (2): 143–164. doi:10.1145/130226.134466. S2CID 1691726.
9. Aggarwal, Charu C.; Yu, Philip S.; Han, Jiawei; Wang, Jianyong (2003). "A Framework for Clustering Evolving Data Streams" (PDF: http://www.vldb.org/conf/2003/papers/S04P02.pdf). Proceedings 2003 VLDB Conference: 81–92. doi:10.1016/B978-012722442-8/50016-1. ISBN 9780127224428. S2CID 2354576.
