# The number of clusters in Hierarchical Clustering

[This article was first published on **Chen-ang Statistics » R**, and kindly contributed to R-bloggers.]


Cluster analysis is widely applied in data analysis, and hierarchical clustering is among the simplest and most important ways to do it. In brief, hierarchical clustering methods use the elements of a proximity matrix to generate a tree diagram, or dendrogram, from which we can draw our own conclusions about the results of clustering. However, once a hierarchical solution is given, the question is how to determine the number of clusters k: for some value of k, we want to determine whether the clusters are sufficiently separated, with minimal overlap. There is no doubt that we can choose an appropriate threshold value or use a scatter diagram to decide that, but test statistics are also very useful for choosing k. Several valuable test statistics (or pseudo test statistics) follow. In addition, I also provide a corresponding R function to implement them.

**1 R^2 statistic**

The $R^2$ for k clusters is defined as

$$R^2_k = 1 - \frac{P_k}{T},$$

where $T$ and $P_k = \sum_{j=1}^{k} W_j$ mean the total sum of squares and the within-cluster sum of squares, respectively, and $W_j$ is the sum of squares within cluster $j$. For n clusters, obviously each $W_j = 0$ so that $R^2_n = 1$. As the number of clusters decreases from n to 1, the clusters should become more widely separated and $R^2$ decreases. A large decrease in $R^2$ would represent a distinct join. Actually, we can also use the semipartial $R^2$ statistic to reach our goal.
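The author's own function is linked below; as a minimal sketch of the same idea (assuming Ward clustering on the standardized `USArrests` data, both my choices), $R^2_k$ can be computed with base R's `hclust()` and `cutree()`:

```r
# Sketch: R^2_k for a hierarchical clustering, using only base R.
x  <- scale(USArrests)                    # standardized example data
hc <- hclust(dist(x), method = "ward.D2") # Ward's method

# P_k: within-cluster sum of squares when the tree is cut into k clusters
Pk <- function(x, hc, k) {
  labels <- cutree(hc, k)
  sum(sapply(split(seq_len(nrow(x)), labels), function(idx) {
    xg <- x[idx, , drop = FALSE]
    sum(scale(xg, scale = FALSE)^2)       # squared deviations from the cluster mean
  }))
}

TSS <- sum(scale(x, scale = FALSE)^2)     # total sum of squares T
r2  <- sapply(1:10, function(k) 1 - Pk(x, hc, k) / TSS)
round(r2, 3)                              # rises from 0 toward 1 as k grows
```

An elbow where this curve flattens sharply suggests a candidate value of k.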

**2 Semipartial R^2 statistic**

The semipartial $R^2$ for joining clusters $K$ and $L$ into cluster $M$ is defined as

$$\text{semipartial } R^2 = \frac{B_{KL}}{T},$$

where $B_{KL} = W_M - W_K - W_L$ is equal to the increase in the within-cluster sum of squares caused by the join, and $W_j$ means the sum of squares within cluster $j$. In other words, it is the decrease in $R^2$ produced by that join.
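As a sketch (again assuming Ward clustering of the standardized `USArrests` data, my choices), the semipartial $R^2$ at each join is simply the drop $(P_{k-1} - P_k)/T$:

```r
# Sketch: semipartial R^2 for each join, using only base R.
x  <- scale(USArrests)
hc <- hclust(dist(x), method = "ward.D2")

# P_k: within-cluster sum of squares for the k-cluster partition
Pk <- function(x, hc, k) {
  labels <- cutree(hc, k)
  sum(sapply(split(seq_len(nrow(x)), labels), function(idx) {
    xg <- x[idx, , drop = FALSE]
    sum(scale(xg, scale = FALSE)^2)
  }))
}

TSS <- sum(scale(x, scale = FALSE)^2)
# semipartial R^2 for the join that merges k clusters down to k - 1
sprsq <- sapply(2:10, function(k) (Pk(x, hc, k - 1) - Pk(x, hc, k)) / TSS)
round(sprsq, 3)                  # a large value marks a distinct join
```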

**3 Pseudo F statistic**

The pseudo $F$ statistic for k clusters is defined as

$$\text{pseudo } F = \frac{(T - P_k)/(k - 1)}{P_k/(n - k)},$$

where $n$ is the number of observations.

If the pseudo $F$ statistic, plotted against k, rises to a maximum and then decreases, the value of k at the maximum, or immediately prior to that point, may be a candidate for the number of clusters.
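Under the same assumptions as before (Ward clustering of the standardized `USArrests` data), a sketch of the pseudo $F$ computation:

```r
# Sketch: pseudo F over a range of k, using only base R.
x  <- scale(USArrests)
hc <- hclust(dist(x), method = "ward.D2")
n  <- nrow(x)

# P_k: within-cluster sum of squares for the k-cluster partition
Pk <- function(x, hc, k) {
  labels <- cutree(hc, k)
  sum(sapply(split(seq_len(nrow(x)), labels), function(idx) {
    xg <- x[idx, , drop = FALSE]
    sum(scale(xg, scale = FALSE)^2)
  }))
}

TSS <- sum(scale(x, scale = FALSE)^2)
pseudoF <- sapply(2:10, function(k) {
  p <- Pk(x, hc, k)
  ((TSS - p) / (k - 1)) / (p / (n - k))
})
round(pseudoF, 2)                # look for a (local) maximum
```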

**4 Pseudo t^2 statistic**

The pseudo $t^2$ is defined as

$$\text{pseudo } t^2 = \frac{B_{KL}}{(W_K + W_L)/(n_K + n_L - 2)}$$

for joining clusters $K$ and $L$ with each having $n_K$ and $n_L$ elements, where $B_{KL} = W_M - W_K - W_L$ as above.
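A sketch of the pseudo $t^2$ computation (same assumed setup: Ward clustering of the standardized `USArrests` data). The two clusters joined between the k- and (k−1)-cluster partitions are identified by cross-tabulating the two `cutree()` labelings:

```r
# Sketch: pseudo t^2 for the join that merges k clusters into k - 1.
x  <- scale(USArrests)
hc <- hclust(dist(x), method = "ward.D2")

# within-cluster sum of squares of the rows in idx
w <- function(idx) {
  xg <- x[idx, , drop = FALSE]
  sum(scale(xg, scale = FALSE)^2)
}

pseudo_t2 <- function(k) {
  lab_k  <- cutree(hc, k)
  lab_k1 <- cutree(hc, k - 1)
  tab    <- table(lab_k, lab_k1)
  merged <- which(colSums(tab > 0) == 2)  # the (k-1)-cluster formed by the join
  parts  <- which(tab[, merged] > 0)      # its two parents K and L at level k
  iK <- which(lab_k == parts[1])
  iL <- which(lab_k == parts[2])
  BKL <- w(c(iK, iL)) - w(iK) - w(iL)     # increase in within-cluster SS
  BKL / ((w(iK) + w(iL)) / (length(iK) + length(iL) - 2))
}

pseudo_t2(2)                              # t^2 for the final join of the tree
```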

**Implementation**

As a matter of fact, SAS enables us to obtain these statistics easily through PROC CLUSTER and PROC TREE. However, it is not as convenient to calculate them in R. Last semester, as a teaching assistant for a course on multivariate statistical analysis, I saw the professor assign the students to write R functions calculating one of these test statistics. In order to check their code, I also wrote an R function that calculates all of these test statistics at the same time. The output of this function is similar to the SAS output. If you want to view the source code, please click this link.

**Further discussion**

Besides writing your own function, a package called NbClust offers a simpler and better way to determine the number of clusters. It provides 30 popular indices and also recommends a number of clusters to the user. More details can be found in the reference manual of the package.

```r
library(NbClust)
data(USArrests)
# diss must be NULL (not the string "NULL"); recent versions of NbClust
# spell Ward's method "ward.D2" rather than "ward"
NbClust(USArrests, diss = NULL, distance = "euclidean",
        min.nc = 2, max.nc = 8, method = "ward.D2",
        index = "pseudot2", alphaBeale = 0.1)
```

Please note that the output is a little different from the SAS output.

**Reference**

Timm, Neil H. Applied multivariate analysis. Springer, 2002.
