
In 2002, Matthias Schonlau published an article in "The Stata Journal" titled "The Clustergram: A graph for visualizing hierarchical and nonhierarchical cluster analyses". As explained in the abstract:

In hierarchical cluster analysis dendrogram graphs are used to visualize how clusters are formed. I propose an alternative graph named “clustergram” to examine how cluster members are assigned to clusters as the number of clusters increases.
This graph is useful in exploratory analysis for non-hierarchical clustering algorithms like k-means and for hierarchical cluster algorithms when the number of observations is large enough to make dendrograms impractical.

A similar article was later written and (possibly) published in "Computational Statistics".

Both articles give some nice background on well-known methods such as k-means and hierarchical clustering, and then go on to present examples of using these methods (together with the clustergram) to analyse several datasets.

Personally, I understand the clustergram to be a type of parallel coordinates plot where each observation is given a vector. The vector contains the observation’s location for each number of clusters the dataset was split into, and the scale of these locations is that of the first principal component of the data.

### Clustergram in R (a basic function)

After finding out about this method of visualization, I was curious to play with it a bit. Since I didn’t find any implementation of the graph in R, I went about writing the code to implement it.

The code only works for k-means, but it shows how such a plot can be produced, and it could later be extended to connect with other clustering algorithms.

The function I present here takes a data.frame/matrix with a row for each observation and the variable dimensions in the columns.
The function assumes the data is scaled.
It then calculates the cluster centers for the data, for a varying number of clusters.
For each number of clusters, the cluster centers are multiplied by the first loading of the principal components of the original data, giving a weighted mean of each cluster center’s dimensions that might offer a decent one-number representation of that cluster (this has the known limitations of using the first component of a PCA for dimensionality reduction, but I won’t go into that in this post).
Finally, all of the data points are placed according to their respective cluster’s first-component value and plotted against the number of clusters (thus creating the clustergram).
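
To make that computation concrete, here is a minimal sketch of the projection step for a single number of clusters (using the iris data that appears later in the post; the variable names here are mine, not from the function below):

# Condensed sketch of the core step: project k-means centers onto the first PC loading
Data <- scale(iris[, -5])                      # scaled observations
pc1.loading <- princomp(Data)$loadings[, 1]    # loading of the first principal component
cl <- kmeans(Data, centers = 3)                # one choice of k
centers.1d <- cl$centers %*% pc1.loading       # one number per cluster center
y <- centers.1d[cl$cluster]                    # each observation takes its cluster's value

The full function below repeats this for every k in k.range, and adds a small vertical offset (line.width) so that lines sharing a cluster center don’t overlap.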

My thanks go to Hadley Wickham for offering some good tips on how to prepare the graph.

Here is the code (an example follows):



clustergram.kmeans <- function(Data, k, ...)
{
# this is the type of function that the clustergram
# 	function takes for the clustering.
# 	using similar structure will allow implementation of different clustering algorithms

#	It returns a list with two elements:
#	cluster = a vector of length of n (the number of subjects/items)
#				indicating to which cluster each item belongs.
#	centers = a k dimensional vector.  Each element is 1 number that represents that cluster.
#				In our case, we are using the weighted mean of the cluster dimensions,
#				using the first component (loading) of the PCA of the Data.

cl <- kmeans(Data, k,...)

cluster <- cl$cluster
centers <- cl$centers %*% princomp(Data)$loadings[,1]	# 1 number per center
# here we are using the weighted mean for each center

return(list(
cluster = cluster,
centers = centers
))
}

clustergram.plot.matlines <- function(X, Y, k.range,
x.range, y.range, COL,
add.center.points, centers.points)
{
plot(0, 0, col = "white", xlim = x.range, ylim = y.range,
axes = F,
xlab = "Number of clusters (k)",
ylab = "PCA weighted Mean of the clusters",
main = "Clustergram of the PCA-weighted Mean of the clusters k-mean clusters vs number of clusters (k)")
axis(side = 1, at = k.range)
axis(side = 2)
abline(v = k.range, col = "grey")

matlines(t(X), t(Y), pch = 19, col = COL, lty = 1, lwd = 1.5)

if(add.center.points)
{
require(plyr)
xx <- ldply(centers.points, rbind)
points(xx$y ~ xx$x, pch = 19, col = "red", cex = 1.3)	# add the cluster-center points
# An alternative (assigned to a variable only to suppress the "NULL" output):
# temp <- l_ply(centers.points, function(xx) {
#	with(xx, points(y ~ x, pch = 19, col = "red", cex = 1.3))
#	return(1)
# })
}
}

clustergram <- function(Data, k.range = 2:10,
clustering.function = clustergram.kmeans,
clustergram.plot = clustergram.plot.matlines,
line.width = .004, add.center.points = T)
{
# Data - should be a scaled matrix, where each column belongs to a different dimension of the observations
# k.range - a vector with the numbers of clusters to plot the clustergram for
# clustering.function - not really used yet, but offers a basis for later extending the function to other algorithms
#			(although that would require more work on the code)
# line.width - the amount by which to lift each line in the plot so they won't superimpose each other
# add.center.points - whether to plot points at the cluster means

n <- dim(Data)[1]

PCA.1 <- Data %*% princomp(Data)$loadings[,1]	# first principal component of our data

if(require(colorspace)) {
COL <- heat_hcl(n)[order(PCA.1)]	# line colors
} else {
COL <- rainbow(n)[order(PCA.1)]	# line colors
warning('Please consider installing the package "colorspace" for prettier colors')
}

line.width <- rep(line.width, n)

Y <- NULL	# Y matrix
X <- NULL	# X matrix

centers.points <- list()

for(k in k.range)
{
k.clusters <- clustering.function(Data, k)

clusters.vec <- k.clusters$cluster
# the.centers <- apply(cl$centers, 1, mean)
the.centers <- k.clusters$centers

# "noise" lifts each line within its cluster by a small cumulative amount, so lines sharing a center don't superimpose
noise <- unlist(tapply(line.width, clusters.vec, cumsum))[order(seq_along(clusters.vec)[order(clusters.vec)])]
# noise <- noise - mean(range(noise))
y <- the.centers[clusters.vec] + noise
Y <- cbind(Y, y)
x <- rep(k, length(y))
X <- cbind(X, x)

centers.points[[k]] <- data.frame(y = the.centers , x = rep(k , k))
#	points(the.centers ~ rep(k , k), pch = 19, col = "red", cex = 1.5)
}

x.range <- range(k.range)
y.range <- range(PCA.1)

clustergram.plot(X, Y, k.range,
x.range, y.range, COL,
add.center.points, centers.points)
}

### Example on the iris dataset

The iris data set is a favorite example of many R bloggers when writing about R accessors, data exporting, data importing, and various visualization techniques.
So it seemed only natural to experiment on it here.

data(iris)
set.seed(250)
par(cex.lab = 1.5, cex.main = 1.2)
Data <- scale(iris[,-5]) # notice I am scaling the vectors
clustergram(Data, k.range = 2:8, line.width = 0.004) # notice how I am using line.width.  Play with it on your problem, according to the scale of Y.

Here is the output:

Looking at the image, we can notice a few interesting things. One of the clusters formed (the lower one) stays as is no matter how many clusters we allow (except for one observation that goes away and then comes back).
We can also see that the second split is a solid one (in the sense that it splits the first cluster into two clusters which are not “close” to each other, and that about half of the observations go to each of the new clusters).
Then notice how moving to 5 clusters makes almost no difference.
Lastly, notice how, when going for 8 clusters, we are practically left with 4 clusters (remember – this is according to the mean of the cluster centers, weighted by the loading of the first component of the PCA on the data).

If I were to take something from this graph, I would say I have a strong tendency to use 3-4 clusters on this data.

But wait, did our clustering algorithm do a stable job?
Let’s try running the algorithm 6 more times (each run will have a different starting point for the clusters):

set.seed(500)
Data <- scale(iris[,-5]) # notice I am scaling the vectors
par(cex.lab = 1.2, cex.main = .7)
par(mfrow = c(3,2))
for(i in 1:6) clustergram(Data, k.range = 2:8 , line.width = .004, add.center.points = T)

Which results in the following (click the image to enlarge it):

Repeating the analysis offers even more insights.
First, it would appear that up to 3 clusters, the algorithm gives rather stable results.
From 4 clusters onwards we get various outcomes at each iteration.
In some of the cases, we got 3 clusters even when we asked for 4 or 5 clusters.

Reviewing the new plots, I would prefer to go with the 3-cluster option, noting how the two “upper” clusters might have similar properties while the lower cluster is quite distinct from the other two.

By the way, the iris data set is composed of three types of flowers. I imagine k-means has done a decent job of distinguishing the three.
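
As a quick sanity check (not part of the clustergram code itself), one can cross-tabulate a 3-cluster k-means solution against the known species labels:

# Compare a k = 3 k-means solution with the species labels
set.seed(250)
table(kmeans(scale(iris[, -5]), centers = 3)$cluster, iris$Species)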

### Limitation of the method (and a possible way to overcome it?!)

It is worth noting that the current way the algorithm is built has a fundamental limitation: the plot is only good for detecting a situation where there are several clusters and each of them is clearly “bigger” than the one before it (on the first principal component of the data).

For example, let’s create a dataset with 3 clusters, each one taken from a normal distribution with a progressively higher mean:

set.seed(250)
Data <- rbind(
cbind(rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3)),
cbind(rnorm(100,1, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,1, sd = 0.3)),
cbind(rnorm(100,2, sd = 0.3),rnorm(100,2, sd = 0.3),rnorm(100,2, sd = 0.3))
)
clustergram(Data, k.range = 2:5 , line.width = .004, add.center.points = T)

The resulting plot for this is the following:

The image shows a clear distinction between three ranks of clusters. There is no doubt (for me) from looking at this image, that three clusters would be the correct number of clusters.

But what if the clusters were different but didn’t have an ordering to them?
For example, look at the following 4 dimensional data:

set.seed(250)
Data <- rbind(
cbind(rnorm(100,1, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3)),
cbind(rnorm(100,0, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3)),
cbind(rnorm(100,0, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,1, sd = 0.3),rnorm(100,0, sd = 0.3)),
cbind(rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,0, sd = 0.3),rnorm(100,1, sd = 0.3))
)
clustergram(Data, k.range = 2:8 , line.width = .004, add.center.points = T)

In this situation, it is not clear from the location of the clusters on the Y axis that we are dealing with 4 clusters.
But what is interesting is that, as the number of clusters grows, we can notice 4 “strands” of data points moving more or less together (until we reach 4 clusters, at which point the clusters start breaking up).
Another hope for handling this might be to use the color of the lines in some way, but I haven’t yet figured out how.

### Clustergram with ggplot2

Hadley Wickham has kindly played with recreating the clustergram using the ggplot2 engine. You can see the result here:
http://gist.github.com/439761

I’ve broken it down into three components:
* run the clustering algorithm and get predictions (many_kmeans and all_hclust)
* produce the data for the clustergram (clustergram)
* plot it (plot.clustergram)
I don’t think I have the logic behind the y-position adjustment quite right though.

Here is an example of how it looks:

### Conclusions (some rules of thumb and questions for the future)

At first look, it would appear that the clustergram can be of use. I can imagine using this graph to quickly run various clustering algorithms and then compare them to each other and review their stability (in the way I just demonstrated in the example above).
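
For instance, thanks to the clustering.function argument, one could sketch a wrapper for hierarchical clustering with the same interface as clustergram.kmeans. The wrapper below is my own hypothetical illustration, not part of the code above; it summarizes each cluster by the mean first-principal-component score of its members:

clustergram.hclust <- function(Data, k, ...)
{
	# hypothetical wrapper: same return structure as clustergram.kmeans
	hc <- hclust(dist(Data))                                  # complete-linkage hierarchical clustering
	cluster <- cutree(hc, k)                                  # cluster membership for each observation
	pc1 <- as.vector(Data %*% princomp(Data)$loadings[, 1])   # first principal component scores
	centers <- tapply(pc1, cluster, mean)                     # one number per cluster
	return(list(cluster = cluster, centers = centers))
}

clustergram(scale(iris[, -5]), k.range = 2:8, clustering.function = clustergram.hclust)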

The three rules of thumb I have noticed by now are:

1. Look at the location of the cluster points on the Y axis. See when they remain stable, when they start flying around, and what happens to them at higher numbers of clusters (do they re-group together?).
2. Observe the strands of the data points. Even if the cluster centers are not ordered, the lines for each item might (this needs more research and thinking) tend to move together – hinting at the real number of clusters.
3. Run the plot multiple times to observe the stability of the cluster formation (and location).

Yet there is more work to be done and questions to seek answers to:

• The code needs to be extended to offer methods to various clustering algorithms.
• How can the colors of the lines be used better?
• How can this be done using other graphical engines (ggplot2/lattice?) – (Update: look at Hadley’s reply in the comments)
• What should be done when the first principal component doesn’t capture enough of the data? (Maybe plot this graph for all of the relevant components – but then, how do you draw conclusions from it?) A quick way to check how much PC1 captures is sketched after this list.
• What other uses/conclusions can be made based on this graph?
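
As a starting point for the question about the first principal component, here is a quick check (my own addition, using the iris example from above) of how much of the variance PC1 captures; if its share is small, the y-axis of the clustergram summarizes the cluster centers poorly:

# Check how much variance the first principal component explains (iris example)
Data <- scale(iris[, -5])
pc <- princomp(Data)
round(pc$sdev^2 / sum(pc$sdev^2), 3)   # proportion of variance per component; the first value is PC1's share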