Clustering allows us to better understand how a sample might be comprised of distinct subgroups given a set of variables. While many introductions to cluster analysis typically review a simple application using continuous variables, clustering data of mixed types (e.g., continuous, ordinal, and nominal) is often of interest. The following is an overview of one approach to clustering data of mixed types using Gower distance, partitioning around medoids, and silhouette width.
In total, there are three related decisions that need to be made for this approach:
 Calculating distance
 Choosing a clustering algorithm
 Selecting the number of clusters
For illustration, the publicly available “College” dataset found in the ISLR package will be used, which has various statistics of US Colleges from 1995 (N = 777). To highlight the challenge of handling mixed data types, variables that are both categorical and continuous will be used and are listed below:
 Continuous
 Acceptance rate
 Out-of-state tuition
 Number of new students enrolled
 Categorical
 Whether a college is public/private
 Whether a college is elite, defined as having more than 50% of new students who graduated in the top 10% of their high school class
The code was run using R version 3.2.2 with the following packages:
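The package list itself did not carry over here, but based on the functions used throughout the post (the College data from ISLR, daisy and pam from cluster, Rtsne for the visualization), a plausible setup looks like the sketch below; dplyr and ggplot2 are assumed for the cleaning and plotting steps.

```r
# Packages inferred from the functions used in this post
library(dplyr)    # data cleaning
library(ISLR)     # College dataset
library(cluster)  # daisy(), pam(), silhouette width
library(Rtsne)    # t-SNE for visualization
library(ggplot2)  # plotting
```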
Before clustering can begin, some data cleaning must be done:
 Acceptance rate is created by dividing the number of acceptances by the number of applications
 isElite is created by labeling colleges with more than 50% of their new students who were in the top 10% of their high school class as elite
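A sketch of that cleaning step is shown below. It assumes the standard column names of the College data (Accept, Apps, Top10perc, Outstate, Enroll, Private, Grad.Rate); the exact set of columns kept, and the names accept_rate and isElite, are choices made here for illustration.

```r
# Hypothetical cleaning step: compute the acceptance rate, flag elite
# colleges, and keep the variables used for clustering
college_clean <- College %>%
  mutate(name = row.names(.),                      # college names are stored as row names
         accept_rate = Accept / Apps,              # acceptances divided by applications
         isElite = factor(ifelse(Top10perc > 50,   # >50% of new students from the top 10%
                                 "Elite", "Not Elite"))) %>%
  select(name, accept_rate, Outstate, Enroll, Grad.Rate, Private, isElite)
```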
Calculating Distance
In order for a yet-to-be-chosen algorithm to group observations together, we first need to define some notion of (dis)similarity between observations. A popular choice for clustering is Euclidean distance. However, Euclidean distance is only valid for continuous variables, and thus is not applicable here. In order for a clustering algorithm to yield sensible results, we have to use a distance metric that can handle mixed data types. In this case, we will use something called Gower distance.
Gower distance
The concept of Gower distance is actually quite simple. For each variable type, a particular distance metric that works well for that type is used and scaled to fall between 0 and 1. Then, a linear combination using user-specified weights (most simply an average) is calculated to create the final distance matrix. The metrics used for each data type are described below:
 quantitative (interval): range-normalized Manhattan distance
 ordinal: variable is first ranked, then Manhattan distance is used with a special adjustment for ties
 nominal: variables of k categories are first converted into k binary columns and then the Dice coefficient is used
 pros: Intuitive to understand and straightforward to calculate
 cons: Sensitive to non-normality and outliers in continuous variables, so transformations might be necessary as a preprocessing step. It also requires an NxN distance matrix to be calculated, which is computationally intensive to keep in memory for large samples
Below, we see that Gower distance can be calculated in one line using the daisy function. Note that due to positive skew in the Enroll variable, a log transformation is applied internally via the type argument. Instructions for additional transformations, such as treating factors as asymmetric binary variables (e.g., rare events), can be found in ?daisy.
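A sketch of that call is below, assuming the cleaned data frame is called college_clean with the college name in its first column and Enroll as the third of the remaining variables (the logratio index has to match wherever Enroll actually sits).

```r
# Gower distance on everything except the name column;
# type = list(logratio = 3) log-transforms the third remaining
# column (Enroll) to handle its positive skew
gower_dist <- daisy(college_clean[, -1],
                    metric = "gower",
                    type = list(logratio = 3))

summary(gower_dist)
```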
As a sanity check, we can print out the most similar and dissimilar pair in the data to see if it makes sense. In this case, University of St. Thomas and John Carroll University are rated to be the most similar given the seven features used in the distance calculation, while University of Science and Arts of Oklahoma and Harvard are rated to be the most dissimilar.
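One way to run that sanity check, assuming the gower_dist object from above:

```r
# Convert the dissimilarity object to a matrix so pairs can be looked up
gower_mat <- as.matrix(gower_dist)

# Most similar pair (smallest off-diagonal distance)
college_clean[
  which(gower_mat == min(gower_mat[gower_mat != min(gower_mat)]),
        arr.ind = TRUE)[1, ], ]

# Most dissimilar pair (largest distance)
college_clean[
  which(gower_mat == max(gower_mat), arr.ind = TRUE)[1, ], ]
```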
Choosing a clustering algorithm
Now that the distance matrix has been calculated, it is time to select an algorithm for clustering. While many algorithms that can handle a custom distance matrix exist, partitioning around medoids (PAM) will be used here.
Partitioning around medoids is an iterative clustering procedure with the following steps:
 Choose k random entities to become the medoids
 Assign every entity to its closest medoid (using our custom distance matrix in this case)
 For each cluster, identify the observation that would yield the lowest average distance if it were to become the medoid. If that observation differs from the current medoid, make it the new medoid.
 If at least one medoid has changed, return to step 2. Otherwise, end the algorithm.
If you know the k-means algorithm, this might look very familiar. In fact, both approaches are identical, except k-means has cluster centers defined by Euclidean distance (i.e., centroids), while cluster centers for PAM are restricted to be the observations themselves (i.e., medoids).
 pros: Easy to understand, more robust to noise and outliers when compared to k-means, and has the added benefit of having an observation serve as the exemplar for each cluster
 cons: Both run time and memory are quadratic (i.e., $O(n^2)$)
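As a minimal sketch, PAM is available as pam() in the cluster package and accepts the precomputed Gower dissimilarities directly; k is only a placeholder here, since the number of clusters is chosen in the next section.

```r
# diss = TRUE tells pam() that gower_dist is already a dissimilarity
# object rather than raw data; k = 2 is only a placeholder value
pam_fit_demo <- pam(gower_dist, diss = TRUE, k = 2)
pam_fit_demo$medoids   # indices/labels of the medoid observations
```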
Selecting the number of clusters
A variety of metrics exist to help choose the number of clusters to be extracted in a cluster analysis. We will use silhouette width, an internal validation metric which is an aggregated measure of how similar an observation is to its own cluster compared to its closest neighboring cluster. The metric can range from -1 to 1, where higher values are better. After calculating silhouette width for clusters ranging from 2 to 10 for the PAM algorithm, we see that 3 clusters yields the highest value.
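A sketch of that selection step, fitting PAM for k = 2 through 10 and storing the average silhouette width reported in each fit's silinfo component:

```r
# Average silhouette width for k = 2 through 10
sil_width <- c(NA)
for (i in 2:10) {
  pam_fit <- pam(gower_dist, diss = TRUE, k = i)
  sil_width[i] <- pam_fit$silinfo$avg.width
}

# Higher silhouette width is better
plot(1:10, sil_width,
     xlab = "Number of clusters",
     ylab = "Average silhouette width")
lines(1:10, sil_width)
```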
Cluster Interpretation
Via Descriptive Statistics
After running the algorithm and selecting three clusters, we can interpret the clusters by running summary on each cluster. Based on these results, Cluster 1 appears to be mainly Private/Not Elite with medium levels of out-of-state tuition and smaller enrollments. Cluster 2, on the other hand, is mainly Private/Elite with lower acceptance rates, high out-of-state tuition, and high graduation rates. Finally, Cluster 3 is mainly Public/Not Elite with the lowest tuition, the largest enrollments, and the lowest graduation rates.
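One way to produce those per-cluster summaries, assuming pam_fit is the three-cluster solution and college_clean is the cleaned data from earlier (the pam_results name is just for illustration):

```r
# Fit PAM with the chosen k = 3 and run summary() within each cluster
pam_fit <- pam(gower_dist, diss = TRUE, k = 3)

pam_results <- college_clean %>%
  select(-name) %>%                          # drop the identifier column
  mutate(cluster = pam_fit$clustering) %>%   # attach cluster assignments
  group_by(cluster) %>%
  do(the_summary = summary(.))               # one summary per cluster

pam_results$the_summary
```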
Another benefit of the PAM algorithm with respect to interpretation is that the medoids serve as exemplars of each cluster. From this, we see that Saint Francis University is the medoid of the Private/Not Elite cluster, Barnard College is the medoid for the Private/Elite cluster, and Grand Valley State University is the medoid for the Public/Not Elite cluster.
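The exemplars can be pulled straight from the fitted object; with a dissimilarity input, pam_fit$medoids identifies the medoid observations, so something like the following displays their rows:

```r
# Rows of the data corresponding to each cluster's medoid
college_clean[pam_fit$medoids, ]
```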
Via Visualization
One way to visualize many variables in a lower dimensional space is with t-distributed stochastic neighbor embedding, or t-SNE. This method is a dimension reduction technique that tries to preserve local structure so as to make clusters visible in a 2D or 3D visualization. While it typically utilizes Euclidean distance, it can handle a custom distance metric like the one we created above. In this case, the plot shows the three well-separated clusters that PAM was able to detect. One curious thing to note is that there is a small group that is split between the Private/Elite cluster and the Public/Not Elite cluster.
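A sketch of how that plot can be produced with the Rtsne package, passing the Gower matrix in as a precomputed distance (is_distance = TRUE) and coloring points by their PAM cluster:

```r
# 2-D t-SNE embedding of the Gower distances
tsne_obj <- Rtsne(gower_dist, is_distance = TRUE)

tsne_data <- tsne_obj$Y %>%
  data.frame() %>%
  setNames(c("X", "Y")) %>%
  mutate(cluster = factor(pam_fit$clustering),
         name = college_clean$name)

ggplot(aes(x = X, y = Y), data = tsne_data) +
  geom_point(aes(color = cluster))
```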
By investigating further, it looks like this group is made up of the larger, more competitive public schools, like the University of Virginia or the University of California at Berkeley. While not large enough to warrant an additional cluster according to silhouette width, these 13 schools certainly have characteristics distinct from the other three clusters.
A Final Note: Dealing with Larger Samples and One-Hot Encoding
Because using a custom distance metric requires keeping an NxN matrix in memory, the memory cost starts to become prohibitive for larger sample sizes (> 10,000 or so on my machine). For clustering larger samples, I have found two options:
 Two-step clustering in SPSS: This model-based clustering approach can handle categorical and continuous variables and utilizes silhouette width (using rule-of-thumb cutoffs) to find the optimal number of clusters.
 Using Euclidean distance on data that has been one-hot encoded: While much quicker computationally, note that this is not optimal, as you run into the curse of dimensionality fairly fast since all categorical variables are recoded into sparse dummy columns. Note that this approach is actually fairly similar to the Dice coefficient used in the Gower distance calculation, except that it incorrectly treats 0-0 agreement as a match. More on this can be seen in this discussion.
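For the second option, a minimal sketch using base R's model.matrix() to one-hot encode the factors before computing Euclidean distance (scaling of the continuous variables, omitted here, would usually be needed as well):

```r
# One-hot encode the factor columns, then use Euclidean distance;
# unlike the Dice coefficient inside Gower distance, this treats
# 0-0 agreement on a dummy column as a match
college_onehot <- model.matrix(~ . - 1, data = college_clean[, -1])
euclid_dist <- dist(college_onehot, method = "euclidean")
```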