In the context of k-means clustering, we want to partition the space of our observations into k classes: each observation belongs to the cluster with the nearest mean. Here, "nearest" is in the sense of some norm, usually the ℓ2 (Euclidean) norm.
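In other words, writing μj for the mean of cluster Cj, the k-means problem can be stated as minimizing the within-cluster sum of squared distances:

```latex
\min_{C_1,\dots,C_k} \; \sum_{j=1}^{k} \sum_{x_i \in C_j} \left\| x_i - \mu_j \right\|^2 ,
\qquad \text{where } \mu_j = \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i .
```

Finding the exact optimum is hard in general, which is why iterative heuristics such as Lloyd's algorithm (used below) are popular.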
Consider the case where we have 2 classes, the means being the 2 black dots. If we partition based on the nearest mean, the ℓ2 (Euclidean) norm gives the graph on the left, and the ℓ1 (Manhattan) norm the one on the right,
Points in the red region are closer to the mean in the upper part, while points in the blue region are closer to the mean in the lower part. Here, we will always use the standard ℓ2 (Euclidean) norm. Note that the graphs above are related to Voronoi diagrams (or Voronoy, from Вороний in Ukrainian, or Вороно́й in Russian) with 2 points, the 2 means.
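These two partitions are easy to reproduce in base R. The sketch below assigns every point of a grid to its nearest mean under each norm; the two means are hypothetical values chosen only for illustration:

```r
# Two hypothetical means; we assign every point of a grid over [0,1]^2
# to the nearest one, under the Euclidean and the Manhattan norms.
m <- rbind(c(.4, .7), c(.6, .3))
g <- expand.grid(x = seq(0, 1, by = .01), y = seq(0, 1, by = .01))
d_euc <- function(p) sqrt((p[1] - m[, 1])^2 + (p[2] - m[, 2])^2)
d_man <- function(p) abs(p[1] - m[, 1]) + abs(p[2] - m[, 2])
cl_euc <- apply(g, 1, function(p) which.min(d_euc(p)))
cl_man <- apply(g, 1, function(p) which.min(d_man(p)))
# red vs blue regions, Euclidean case (the Manhattan case is analogous)
plot(g, col = c("red", "blue")[cl_euc], pch = 15, cex = .4)
points(m, pch = 19)
```

With the Euclidean norm the boundary is the perpendicular bisector of the segment joining the two means; with the Manhattan norm it is piecewise linear, with segments parallel to the axes or at 45 degrees.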
Here, we have 5 groups, so let us run a 5-means algorithm.
- we draw 5 points at random in the space (initial values for the means),
- in the assignment step, we assign each point to the nearest mean,
- in the update step, we compute the new centroids of the clusters.
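The steps above can be sketched by hand in a few lines of base R. This is only an illustration of Lloyd's algorithm, not the implementation used in the post; the dataset pts, the seed, and the fixed number of iterations are assumptions:

```r
# Minimal sketch of Lloyd's algorithm; pts, the seed, and the number
# of iterations are assumptions made for this illustration.
set.seed(1)
pts <- matrix(runif(200), ncol = 2)    # 100 hypothetical points in [0,1]^2
k <- 5
means <- pts[sample(nrow(pts), k), ]   # random initial values for the means
for (it in 1:10) {
  # assignment step: each point goes to the cluster with the nearest mean
  d <- as.matrix(dist(rbind(means, pts)))[-(1:k), 1:k]
  cluster <- apply(d, 1, which.min)
  # update step: new centroids (keep the old mean if a cluster empties)
  means <- t(sapply(1:k, function(j)
    if (any(cluster == j)) colMeans(pts[cluster == j, , drop = FALSE])
    else means[j, ]))
}
table(cluster)
```

In practice we would iterate until the assignments stop changing rather than for a fixed number of steps; kmeans() takes care of that for us below.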
To visualize the iterations, see the figure below.
The code to get the clusters is
kmeans(pts, centers=5, nstart = 1, algorithm = "Lloyd")
Observe that the assignment step is based on the computation of Voronoi sets. This can be done in R using the tripack package.
This is what we can visualize below
km1 <- kmeans(pts, centers = 5, nstart = 1, algorithm = "Lloyd")
library(tripack)
library(RColorBrewer)
CL5 <- brewer.pal(5, "Pastel1")
V <- voronoi.mosaic(km1$centers[, 1], km1$centers[, 2])
P <- voronoi.polygons(V)
plot(pts, pch = 19, xlim = 0:1, ylim = 0:1, xlab = "", ylab = "",
     col = CL5[km1$cluster])
points(km1$centers[, 1], km1$centers[, 2], pch = 3, cex = 1.5, lwd = 2)
plot(V, add = TRUE)
Here, the starting points are drawn at random, so if we run the algorithm again, we might get a different partition,
On this dataset, it is difficult to obtain clusters that correspond to the five groups we can actually see. If we use several random starting points instead, we usually get something better.
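The exact call is not shown here; a common fix is to increase the nstart argument, so that kmeans() restarts from several random initializations and keeps the partition with the smallest total within-cluster sum of squares. In the sketch below, the dataset, the value nstart = 20, and the use of the default Hartigan-Wong algorithm are all assumptions:

```r
# Hypothetical dataset standing in for pts; the seed is an assumption
set.seed(42)
pts <- matrix(runif(200), ncol = 2)
# restart from 20 random initializations; kmeans() keeps the best fit,
# i.e. the one minimizing the total within-cluster sum of squares
# (default Hartigan-Wong algorithm, rather than Lloyd as above)
km <- kmeans(pts, centers = 5, nstart = 20)
km$tot.withinss
```

A single unlucky initialization can leave two "true" groups sharing one center while another group is split in two; restarting many times makes it much more likely that at least one run finds the natural partition.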