[This article was first published on Engaging Market Research, and kindly contributed to R-bloggers].

Structural equation models impose causal order on a set of observations. We start with a measurement model: a list of theoretical constructs and a table assigning what is observed (manifest) to what is hidden (latent). Although it is possible to think of this assignment as formative rather than reflective, the default is a causal connection with the latent variables responsible for the observed scores. Next, we draw arrows specifying the cause-and-effect relationships among the latent variables. All of this is shown in great detail with a customer satisfaction example in the very well-written vignette for the R package semPLS, which uses partial least squares (PLS) to fit structural equation models (SEM).

Your focus should be on the causal model and not the estimation technique. PLS is optional, and all the parameters can be estimated using maximum likelihood with the lavaan R package. However, you can access the dataset through the semPLS package, and you will not find a better description of this particular example or of the steps involved in specifying and testing a SEM.

As always, there are issues. An earlier post raises a number of concerns with this tale of causal links, suggesting that we might be asked to assume too much when we impose a directionality on mutually interacting components. For example, when it requires effort to change product or service providers, it might be easier to believe that all competitors are the same and that it is futile to seek a better deal elsewhere. Here, the decision to Buy Again encourages us to rethink our dissatisfaction and raise our ratings above what we would have given had switching been easier. Such mutual dependencies are represented by undirected graphs, and the R package qgraph provides an introduction for social scientists.

My goal in this post is a modest one: to demonstrate that one can learn a great deal from a series of customer ratings without needing to force the data into a causal model. This is achieved by examining the following partial correlation network.

You should recall that a graph is a visual display of some adjacency matrix. In this case we define adjacency as the partial correlation between two nodes after controlling for all the other nodes in the graph. Actually, our adjacency matrix is a bit more complicated because we applied the graphical lasso to obtain our estimates. The details are important, yet one can learn a great deal from the graph knowing little more than that the edges show us conditional association after removing the other nodes and that we have made some effort to eliminate as many edges as possible (a sparse undirected graph).
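To make "conditional association after removing the other nodes" concrete, here is a minimal base-R sketch with simulated data (not the mobi ratings): the partial correlations can be read directly off the inverse of the correlation matrix, which is the object the graphical lasso then sparsifies.

```r
# Toy example with simulated data (not the mobi ratings):
# partial correlations come from the inverse correlation (precision) matrix.
set.seed(42)
x <- matrix(rnorm(300), ncol = 3)
x[, 3] <- x[, 1] + x[, 2] + rnorm(100, sd = 0.5)  # make variable 3 depend on 1 and 2

R <- cor(x)
P <- solve(R)                               # precision matrix
pcor <- -P / sqrt(outer(diag(P), diag(P)))  # pcor_ij = -P_ij / sqrt(P_ii * P_jj)
diag(pcor) <- 1
round(pcor, 2)  # off-diagonal entries are the partial correlations
```

The graphical lasso adds a penalty that shrinks the small off-diagonal entries of this matrix all the way to zero, which is what produces the sparse graph.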

All the R code needed to replicate this analysis appears at the end of this post. One of the original 24 items, #9 SwitchForPrice, was removed because it had no edge to any of the other nodes in this partial correlation network (the semPLS documentation reveals that the question had a unique format).
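A sketch of how one might confirm that isolation, assuming the semPLS and qgraph packages are installed: refit the graphical lasso on all 24 items and look for a column of the sparse matrix with no nonzero entries.

```r
library("semPLS")   # provides data(mobi)
library("qgraph")   # provides EBICglasso()

data(mobi)
full_sparse <- EBICglasso(cor(mobi), n = nrow(mobi))
diag(full_sparse) <- 0
# an isolated node has no nonzero partial correlations with any other node
isolated <- which(colSums(full_sparse != 0) == 0)
isolated   # item 9, SwitchForPrice, per the discussion above
```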

One way to start is to identify the thickest edges connecting the remaining 23 customer perception, satisfaction and loyalty ratings. Unsurprisingly, good value and fair price “hang together” since endorsing one and rejecting the other would seem to be a contradiction. Similarly, stability is a key component of network quality, reliability defines service quality, and we do not recommend that which we are unwilling to buy again. These single edges connecting two ratings with common meanings may not be that informative.
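Rather than eyeballing edge thickness, one can rank the surviving edges by the absolute size of their partial correlations. A self-contained sketch (it repeats the setup from the code at the end of the post and uses the original item codes as labels):

```r
library("semPLS")
library("qgraph")

data(mobi)
sparse_matrix <- EBICglasso(cor(mobi[, -9]), n = nrow(mobi))
labels <- colnames(mobi)[-9]

# collect the nonzero upper-triangle entries as an edge list
idx <- which(upper.tri(sparse_matrix) & sparse_matrix != 0, arr.ind = TRUE)
edge_list <- data.frame(from = labels[idx[, 1]],
                        to   = labels[idx[, 2]],
                        pcor = sparse_matrix[idx])

# the five thickest edges in the graph
head(edge_list[order(-abs(edge_list$pcor)), ], 5)
```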

What is interesting, however, is that we can read “the customer’s mind” from the structure of the undirected graph.  First, all the quality measures form a grouping toward the left of the graph: stable, network quality, reliability, service quality, and overall quality. As we move toward the right, we encounter overall satisfaction along with its companion positive perceptions of trusted and fulfilled. In the region just above fall the product and service attributes with range of products and services, innovative, and customer service. Corporate responsibility is more toward the left with the loyalty measures below (e.g., buy again and recommend).

In general, expectations (go wrong, quality, and meet needs) are toward the top and behaviors near the bottom (complaint handling, recommend, and buy again). The most basic quality indicators are found on the left, with the extras, such as good citizenship, appearing on the right (concerned, responsible, fair price, and good value).

Over time, customers form impressions and reach conclusions about the companies providing them goods and services. These attributions are mutually supportive and create a system of interdependencies that seeks an equilibrium. Disturbing that equilibrium anywhere within the system will have its consequences. A company that provides small incentives to current customers in order to encourage them to recruit new customers gets both the new customers and recommending customers with higher satisfaction and improved impressions. Recommendation is more than the result of a sequential causal process with satisfaction as an input. The incentive is an intervention with satisfaction as the outcome. The causality is mutual.

library("semPLS")
data(mobi)

# descriptive names for the 24 graph nodes
# (the 9th, SwitchForPrice, is dropped from the network below)
names(mobi) <- c("QualityExp", "MeetNeedsExp", "GoWrongExp",
                 "OverallSat", "Fulfilled", "IsIdeal",
                 "ComplaintHandling", "BuyAgain", "SwitchForPrice",
                 "Recommend", "Trusted", "Stable",
                 "Responsible", "Concerned", "Innovative",
                 "OverallQuality", "NetworkQuality", "CustomerService",
                 "ServiceQuality", "RangeProdServ", "Reliability",
                 "ClearInfo", "FairPrice", "GoodValue")

library("qgraph")

# calculates the sparse partial correlation matrix (graphical lasso)
sparse_matrix <- EBICglasso(cor(mobi[, -9]), n = 250)

# plots the results
ug <- qgraph(sparse_matrix, layout = "spring",
             labels = names(mobi[-9]), label.scale = FALSE,
             label.cex = 1, node.width = .5)