
We run our cluster analysis with great expectations, hoping to uncover diverse segments with contrasting likes and dislikes of the brands they use. Instead, too often, our K-means analysis returns the above graph of parallel lines, indicating that the pattern of high and low ratings is the same for everyone but at different overall levels. The data come from the R package semPLS and look very much like what one sees with many customer satisfaction surveys.

I will not cover any specifics about the data, but instead refer you to earlier discussions of this dataset, first in a post showing its strong one-dimensional structure using biplots and later in an example of an undirected graph or Markov network displaying brand associations.

We will begin with the mean ratings for the four lines in the above graph and include a relatively small fifth segment in the last column with a different narrative. Ordering the 23 items from lowest to highest mean scores over the entire sample makes both the table below and the graph above easier to read.

                  not at all  a little   some  a lot  pricey
segment size             9%       27%    34%    21%     10%
FairPrice               4.1       5.1    6.4    8.2     5.6
BuyAgain                3.0       6.9    8.7    9.7     3.8
Responsible             4.3       6.2    6.9    8.1     7.0
GoodValue               4.8       5.7    7.3    8.6     7.0
ComplaintHandling       4.4       6.1    7.2    8.9     7.8
Fulfilled               4.8       6.0    7.5    8.6     7.8
IsIdeal                 4.6       6.2    7.7    9.0     7.8
NetworkQuality          5.6       6.2    7.4    8.4     8.0
Recommend               3.8       6.7    8.4    9.6     7.2
ClearInfo               4.6       6.5    7.9    9.2     8.6
Concerned               5.3       6.4    8.1    9.0     8.1
QualityExp              6.0       7.1    7.6    8.6     7.9
CustomerService         4.8       6.7    8.0    9.3     8.5
MeetNeedsExp            6.1       7.1    7.3    8.7     8.4
GoWrongExp              7.0       6.2    7.5    8.5     8.5
Trusted                 6.1       6.6    7.8    9.1     8.4
Innovative              5.8       7.4    8.1    9.2     8.2
Reliability             6.1       6.8    7.9    9.2     8.7
RangeProdServ           6.2       7.1    8.0    9.2     8.3
Stable                  7.1       6.7    7.8    9.1     8.3
OverallQuality          6.3       7.0    8.2    9.2     8.5
ServiceQuality          6.3       7.1    7.9    9.4     8.6
OverallSat              6.4       7.3    8.2    8.9     8.7

You can pick any row in this table and see that the first four segments, covering 90% of the customers, are ordered the same way. The first cluster is simply not at all happy with their mobile phone provider. They give the lowest Buy Again and Recommend ratings; in fact, with only two small exceptions, they uniformly give the lowest scores. In every row the second column is larger (excepting the two discrepancies already mentioned), followed by an even bigger third column, and then the most favorable fourth column. Successful brands have loyal customers, and at least one out of five customers in these data love their brand "a lot," with a mean Buy Again rating of 9.7 on a 10-point scale.
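The ordering claim is easy to verify directly from the published means. A minimal sketch, using two rows copied from the table above (first four segment columns only):

```r
# Segment means for "not at all", "a little", "some", "a lot",
# copied from the FairPrice and BuyAgain rows of the table above
fair_price <- c(4.1, 5.1, 6.4, 8.2)
buy_again  <- c(3.0, 6.9, 8.7, 9.7)

# each segment rates the item higher than the segment before it
stopifnot(all(diff(fair_price) > 0))
stopifnot(all(diff(buy_again) > 0))
```

The same check passes for every row of the table except the two small discrepancies noted in the text.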

You can see why I labeled these four segments with names suggesting differing levels of attraction. Each group has the same profile, as can be seen in the largely parallel lines on our graph. The good news for our providers is that only 9% are definitely at risk. The bad news is that another 10% like the product and the service but will not buy again, perhaps because the price is not perceived as fair (see their graph below with a dip for the second variable, Buy Again, and a much lower score than expected for the first variable, Fair Price, given the elevation of the rest of the curve).

Some might argue that what we are seeing is merely a measurement bias reflecting a propensity among raters to use different portions of the scale. Does this mean that 90% of the customers have identical experiences but give different ratings due to some scale-usage predisposition? If it is a personality trait, does this mean that they use the same range of scale values to rate every brand and every product? Would we have seen individuals using the same narrow range of scores had the items been more specific and more likely to show variation, for example, if they had asked about dropped calls and dead zones rather than network quality?

Given questions without any concrete referent, the uniform patterns of high and low ratings across the items are shaped by a network of interconnected perceptions resulting from a common technology and a shared usage of that technology. In addition, one overhears a good deal of discussion about the product category in the media and from word-of-mouth so that even a nonuser might be aware of the pros and cons. As a result, we tend to find a common ordering of ratings with some customers loving it all “a lot” and others “not at all.” Unless customers can provide a narrative (e.g., “I like the product and service, but it costs too much”), they will all reproduce the same profile of strengths and weaknesses at varying levels of overall happiness. That is, satisfied or not, almost everyone seems to rate value and price fairness lower than they score overall quality and satisfaction.

Finally, my two prior posts cited earlier may seem to paint a somewhat contradictory picture of customer satisfaction ratings. On the one hand, we are likely to find a strong first principal component indicating the presence of a single dimension underlying all the ratings. Customer satisfaction tends to be one-dimensional, so we might expect to observe the four clusters with parallel lines of ratings. Satisfaction falls for everyone as features and services become more difficult for any brand to deliver. On the other hand, the graph of the partial correlations suggests a network of interconnected pairs of ratings after controlling for all the remaining items. One can identify regions with stronger relationships among items measuring quality, product offering, corporate citizenship, and loyalty.

Both appear to be true. Ratings with the highest partial intercorrelations form local neighborhoods, drawn with thicker edges in our undirected graph. Although some nodes are more closely related, all the variables are still connected, either directly with a pairwise edge or indirectly through a separating node. Everything is correlated, but some are more correlated than others.
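Both findings can be reproduced with a few lines of R. As a sketch on simulated one-dimensional ratings (a stand-in for the mobi data; the item count and noise level are arbitrary assumptions), the first principal component dominates while the partial correlations, computed from the inverse correlation matrix, stay positive throughout:

```r
set.seed(42)
n <- 250                                  # respondents
k <- 6                                    # items (arbitrary stand-in)
overall <- rnorm(n)                       # shared satisfaction dimension
X <- sapply(1:k, function(j) overall + rnorm(n, sd = 0.6))

# strong first principal component
pc <- prcomp(X, scale. = TRUE)
pc1_share <- pc$sdev[1]^2 / sum(pc$sdev^2)

# partial correlations from the inverse correlation matrix:
# pcor_ij = -omega_ij / sqrt(omega_ii * omega_jj)
P <- -cov2cor(solve(cor(X)))
diag(P) <- 1

pc1_share               # well above 1/k: one dominant dimension
range(P[upper.tri(P)])  # all positive: every pair stays connected
```

With real survey data the partial correlations would not be this uniform; the stronger pairs are what form the local neighborhoods in the undirected graph.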

R code needed to reproduce these tables and plots.

library("semPLS")
data(mobi)

# descriptive names for the 24 rating items (also used as graph nodes)
names(mobi)<-c("QualityExp",
"MeetNeedsExp",
"GoWrongExp",
"OverallSat",
"Fulfilled",
"IsIdeal",
"ComplaintHandling",
"BuyAgain",
"SwitchForPrice",
"Recommend",
"Trusted",
"Stable",
"Responsible",
"Concerned",
"Innovative",
"OverallQuality",
"NetworkQuality",
"CustomerService",
"ServiceQuality",
"RangeProdServ",
"Reliability",
"ClearInfo",
"FairPrice",
"GoodValue")

# kmeans with 5 clusters and 25 random starts (column 9 excluded)
kcl5<-kmeans(mobi[,-9], 5, nstart=25)

# cluster profiles and sizes
cluster_profile<-t(kcl5$centers)
cluster_size<-kcl5$size

# row and column means
row_mean<-apply(cluster_profile, 1, mean)
col_mean<-apply(cluster_profile, 2, mean)

# Cluster profiles ordered by row means
# columns sorted so that 1-4 are increasing means
# and the last column has low only for buyagain & fairprice
# Warning: random start values likely to yield different order
sorted_profile<-cluster_profile[order(row_mean),c(4,3,1,2,5)]

# reordered cluster sizes (as shares of the 250 respondents) and profiles
cluster_size[c(4,3,1,2,5)]/250
round(sorted_profile,2)

# plots for first 4 clusters
matplot(sorted_profile[,-5], type = c("b"), pch="*", lwd=3,
xlab="23 Brand Ratings Ordered by Average for Total Sample",
ylab="Average Ratings for Each Cluster")
title("Loves Me Little, Some, A Lot, Not At All")

# plot of last cluster
matplot(sorted_profile[,5], type = c("b"), pch="*", lwd=3,
xlab="23 Brand Ratings Ordered by Average for Total Sample",
ylab="Average Ratings for Last Cluster")
title("Got to Switch, Costs Too Much")