Clustering Concepts
Revision as of 21:00, 23 October 2021
Overview
We talk about "clustering" when we have a set of n "points", which we may think of as points in space in a geometric sense. It is actually quite rare that the underlying problem we care about is intrinsically geometric. Usually the points represent something else we care about (web pages, genome sequence fragments, etc.), and we want to cluster them into coherent groups. In machine learning, the same problem is referred to as "unsupervised learning": the data is unlabeled, and we are looking for patterns in it without annotations.
Distance Function
Clustering problems require a similarity measure, which is a function that, for any two objects, returns a numerical result expressing how similar (or dissimilar) those objects are. We refer to this function as the distance function. The distance function d(p,q) returns the distance between the points p and q.
The distance function is symmetric: d(p,q) = d(q,p).
Examples of distance functions: the Euclidean distance between two points in space, the penalty of the best alignment between two genome fragments, etc.
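For instance, the Euclidean distance mentioned above can be written as a small function. This is a minimal sketch, assuming points are given as coordinate tuples (the function name d is chosen to match the notation d(p,q) used here):

```python
import math

def d(p, q):
    # Euclidean distance between two points p and q in n-dimensional space
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# The distance function is symmetric: d(p, q) == d(q, p)
print(d((0, 0), (3, 4)))  # 5.0
```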
Coherent Groups
A coherent group in the universe of points consists of those points that have small distances from each other, so they are "similar". They are considered to be "nearby" each other, and thus form a cluster.
Objective Function
One way to evaluate how good a clustering is is to define an objective function and seek the clustering that optimizes it. This approach is called optimization. There is more than one objective function; among the well-known ones is the K-Means objective function:
K-Means
K-Means clustering partitions n objects into k clusters, in which each object belongs to the cluster with the nearest mean. This method produces exactly k distinct clusters with the greatest possible separation. The best number of clusters k, i.e. the one leading to the greatest separation (distance), is not known a priori and must be computed from the data. The objective of K-Means clustering is to minimize the total intra-cluster variance, i.e. the squared-error objective function J:
J = ∑<sub>j=1</sub><sup>k</sup> ∑<sub>i=1</sub><sup>n</sup> ‖x<sub>i</sub><sup>(j)</sup> − c<sub>j</sub>‖<sup>2</sup>
where:
- k is the number of clusters
- n is the number of cases
- x<sub>i</sub><sup>(j)</sup> is case i, assigned to cluster j
- cj is the centroid for cluster j
- ‖...‖ is the distance function
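The objective J above can be computed directly once a clustering and its centroids are given. A minimal sketch (the function names and the example data below are illustrative, not from the original article):

```python
import math

def euclidean(p, q):
    # Euclidean distance between two points given as coordinate tuples
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans_objective(clusters, centroids):
    # J = sum over clusters j of the squared distance from each point
    # x in cluster j to that cluster's centroid c_j
    return sum(
        euclidean(x, c) ** 2
        for cluster, c in zip(clusters, centroids)
        for x in cluster
    )

clusters = [[(0, 0), (2, 0)], [(10, 0)]]
centroids = [(1, 0), (10, 0)]
print(kmeans_objective(clusters, centroids))  # 1 + 1 + 0 = 2.0
```

K-Means itself alternates between assigning each point to its nearest centroid and recomputing centroids as cluster means; each step can only decrease J.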
Spacing of a K-Clustering
The objective function in this case is the minimum distance between two separated points, which belong to two distinct clusters.
min<sub>separated p, q</sub> d(p, q)
"Good" clustering means that all of the separated points are as far apart as possible: the bigger the spacing, the better. A greedy algorithm can identify the k clusters with the maximum spacing possible:
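One standard realization of this greedy idea is single-linkage merging: start with every point in its own cluster and repeatedly merge the two closest clusters until k remain. The sketch below assumes Euclidean points given as coordinate tuples; the function names are illustrative:

```python
import math

def spacing(clusters):
    # Minimum distance over all pairs of points lying in different clusters
    return min(
        math.dist(p, q)
        for i, ci in enumerate(clusters)
        for cj in clusters[i + 1:]
        for p in ci
        for q in cj
    )

def max_spacing_clusters(points, k):
    # Greedy single-linkage clustering: repeatedly merge the two clusters
    # containing the closest separated pair until only k clusters remain.
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None  # (distance, i, j) of the closest pair of clusters
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                dij = min(math.dist(p, q)
                          for p in clusters[i] for q in clusters[j])
                if best is None or dij < best[0]:
                    best = (dij, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

cs = max_spacing_clusters([(0, 0), (1, 0), (10, 0), (11, 0)], 2)
print(spacing(cs))  # 9.0
```

Each merge removes the currently smallest separated distance from consideration, which is why stopping at k clusters yields the k-clustering of maximum spacing.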
The Clustering Problem
The clustering problem consists of grouping a set of n objects into k clusters so as to minimize (or maximize) an objective function, given a specific distance function between any pair of objects.
Clustering Algorithms
There are many clustering algorithms, some of them greedy. Examples of clustering algorithms: