
Contextual Measurement Is a Game Changer

[This article was first published on Engaging Market Research, and kindly contributed to R-bloggers]. (You can report issues about the content on this page here.)
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.



Adding a context can change one’s frame of reference:

Are you courteous? 
Are you courteous at work? 





Decontextualized questions tend to activate a self-presentation strategy and retrieve memories of past positioning of oneself (impression management). Such personality inventories can be completed without ever thinking about how we actually behave in real situations. The phrase “at work” may disrupt that process if we do not have a prepared statement concerning our workplace demeanor. Yet, a simple “at work” may not be sufficient, and we may be forced to become more concrete and operationally define what we mean by courteous workplace behavior (performance appraisal). Our measures are still self-reports, but the added specificity requires that we relive the events described by the question (episodic memory) rather than providing inferences concerning the possible causes of our behavior.

We have such a data set in R (verbal in the difR package). The data come from a study of verbal aggression triggered by some event: (S1) a bus fails to stop for me, (S2) I miss a train because a clerk gave faulty information, (S3) the grocery store closes just as I am about to enter, or (S4) the operator disconnects me when I have used up my last 10 cents for a call. Obviously, the data were collected during the last millennium, when there were still phone booths, but the final item can be updated as "The automated phone support system disconnects me after I have worked my way through the entire menu of options" (which seems even more upsetting than the original wording).

Alright, we are angry. Now we can respond by shouting, scolding, or cursing, and these verbally aggressive behaviors can be real (do) or fantasy (want to). The factorial combination of 4 situations (S1, S2, S3, and S4) by 2 behavioral modes (Want and Do) by 3 actions (Shout, Scold, and Curse) yields the 24 items of the contextualized personality questionnaire. Respondents read each description and answer "yes" or "no," with "perhaps" as an intermediate point on what might be considered an ordinal scale. Our dataset collapses "yes" and "perhaps" to form a dichotomous scale and thus avoids the issue of whether "perhaps" is a true midpoint or another branch of a decision tree.

David Magis et al. provide a rather detailed analysis of this scale as a problem in differential item functioning (DIF) solved using the R package difR. However, I would like to suggest an alternative approach using nonnegative matrix factorization (NMF). My primary concern is scalability. I would like to see a more complete inventory of events that trigger verbal aggression and a more comprehensive set of possible actions. For example, we might begin with a much longer list of upsetting situations that are commonly encountered, then follow up by asking respondents which situations they have experienced and what they did in each. The result would be a much larger and sparser data matrix that might overburden a DIF analysis but that NMF could easily handle.

Hopefully, you can see the contrast between the two approaches. Here we have four contextual triggering events (bus, train, store, and phone) crossed with 6 different behaviors (want and do by curse, scold and shout). An item response model assumes that responses to each item reflect each individual’s position on a continuous latent variable, in this case, verbal aggression as a personality trait. The more aggressive you are, the more likely you are to engage in more aggressive behaviors. Situations may be more or less aggression-evoking, but individuals maintain their relative standing on the aggression trait.

Nonnegative matrix factorization, on the other hand, searches for a decomposition of the observed data matrix under the constraint that all the factor matrices contain only nonnegative values. These nonnegativity restrictions tend to reproduce the original data matrix from additive parts, as if one were layering components on top of one another. As an illustration, let us say that our sample could be separated into the shouters, the scolders, and those who curse, based on their preferred response regardless of the situation. These three components would be the building blocks, and those who shout their curses would have their data rows formed by the overlay of the shout and curse components. The analysis below will illustrate this point.
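This additive layering can be sketched in a few lines of base R. The matrices and values below are hypothetical, chosen only to show how a respondent who "shouts their curses" is reconstructed as the overlay of two nonnegative components:

```r
# Two nonnegative "building block" components (rows of the coefficient matrix H):
# a pure shout profile and a pure curse profile
shout <- c(Shout = 1, Scold = 0, Curse = 0)
curse <- c(Shout = 0, Scold = 0, Curse = 1)
H <- rbind(shout, curse)

# Basis weights W for three respondents: a shouter, a curser, and
# someone who shouts their curses (both components layered)
W <- rbind(r1 = c(1, 0), r2 = c(0, 1), r3 = c(1, 1))

# NMF approximates the data matrix as V = W %*% H -- strictly additive,
# so the mixed respondent's row is the sum (overlay) of the two parts
V <- W %*% H
V["r3", ]  # Shout and Curse both present, Scold absent
```

Because no subtraction is allowed, each component can only add to the reconstruction, which is what makes the parts-based interpretation possible.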

The NMF R code is presented at the end of this post. You are encouraged to copy and run the analysis after installing difR and NMF. I will limit my discussion to the following coefficient matrix, which shows the contribution of each of the 24 items after rescaling to fall on a scale from 0 to 1.


Item           Want to and Do Scold   Store Closing   Want to and Do Shout   Want to Curse   Do Curse

S2DoScold                      1.00            0.19                   0.00            0.00       0.00
S4WantScold                    0.96            0.00                   0.00            0.08       0.00
S4DoScold                      0.95            0.00                   0.00            0.00       0.11
S1DoScold                      0.79            0.37                   0.02            0.05       0.15

S3WantScold                    0.00            1.00                   0.00            0.08       0.00
S3DoScold                      0.00            0.79                   0.00            0.00       0.00
S3DoShout                      0.00            0.15                   0.14            0.00       0.00

S2WantShout                    0.00            0.00                   1.00            0.13       0.02
S1WantShout                    0.00            0.05                   0.91            0.17       0.04
S4WantShout                    0.00            0.00                   0.76            0.00       0.00
S1DoShout                      0.00            0.12                   0.74            0.00       0.00
S2DoShout                      0.08            0.00                   0.59            0.00       0.00
S4DoShout                      0.10            0.00                   0.39            0.00       0.00
S3WantShout                    0.00            0.34                   0.36            0.00       0.00

S1WantCurse                    0.13            0.18                   0.03            1.00       0.09
S2WantCurse                    0.34            0.00                   0.08            0.92       0.20
S3WantCurse                    0.00            0.41                   0.00            0.85       0.02
S2WantScold                    0.59            0.00                   0.00            0.73       0.00
S1WantScold                    0.40            0.22                   0.01            0.69       0.00
S4WantCurse                    0.31            0.00                   0.00            0.62       0.48

S1DoCurse                      0.24            0.16                   0.01            0.17       1.00
S2DoCurse                      0.47            0.00                   0.00            0.00       0.99
S4DoCurse                      0.46            0.00                   0.02            0.00       0.95
S3DoCurse                      0.00            0.54                   0.00            0.00       0.69

As you can see, I extracted five latent features (the columns of the above coefficient matrix). Although the NMF package offers some indices to assist in determining the number of latent features, I followed the common practice of fitting a number of different solutions and picking the "best" of the lot. It is often informative to learn how the solution changes with the rank of the decomposition. In this case, similar structures were uncovered regardless of the number of latent features. References to a more complete discussion of this question can be found in an August 29th comment on a previous post on NMF.

Cursing was the preferred option across all the situations, and the last two columns reveal a decomposition of the data matrix with a concentration of respondents who do curse or want to curse regardless of the trigger. It should be noted that Store Closing (S3) tended to generate less cursing, as well as less scolding and shouting. Evidently, there was a smaller group that was upset by the store closing, at least enough to scold. This is why the second latent feature is part of the decomposition; we need to layer store closing for those additional individuals who reacted more than the rest. Finally, we have two latent features for those who shout and those who scold across situations. As in principal component analysis, which is also a matrix factorization, one needs to note the size of the coefficients. For example, the middle latent feature reveals a higher contribution for wanting to shout than for actually shouting.

Contextualized Measurement Alters the Response Generation Process

When we describe ourselves or others, we make use of the shared understandings that enable communication (a meeting of minds or brain-to-brain transfer). These inferences concerning the causes of our own and others' behavior are always smoothed or fitted, with context ignored, forgotten, or never noticed. Statistical models of decontextualized self-reports reflect this organization imposed by the communication process. We believe that our behavior is driven by traits, and as a result, our responses can be fit with an item response model assuming latent traits.

Matrix factorization suggests a different model for contextualized self-reports. The possibilities explode with the introduction of context. Relatively small changes in the details create a flurry of new contexts and an accompanying surge in the alternative actions available. For instance, it makes a difference if the person closing the store as you are about to enter has the option of letting one more person in when you plead that it is for a quick purchase. The determining factor may be an emotional affordance, that is, an immediate perception that one is not valued. Moreover, the response to such a trigger will likely be specific to the situation and appropriately selected from a large repertoire of possible behaviors. Leaving the details out of the description only invites the respondents to fill in the blanks themselves.

You should be able to build on my somewhat limited example and extrapolate to a data matrix with many more situations and behaviors. As we saw here, individuals may have preferred responses that generalize over contexts (e.g., cursing tends to be overused), or perhaps there will be situation-specific sensitivity (e.g., store closings). NMF builds the data matrix from additive components that simultaneously cluster both the columns (situation-action pairings) and the rows (individuals). These components are latent, but they are not traits in the sense of dimensions over which individuals are rank-ordered. Instead of differentiating dimensions, we have uncovered the building blocks that are layered to reproduce the data matrix.

Although we are not assuming an underlying dimension, we are open to the possibility. The row heatmap from the NMF may follow a characteristic Guttman scale pattern, but this is only one of many possible outcomes. The process might unfold as follows. One could expect a relationship between the context and the response, with some situations evoking more aggressive behaviors. We could then array the situations by increasing ability to evoke aggressive actions, in the same way that items on an achievement test can be ordered by difficulty. Aggressiveness becomes a dimension when situations accumulate like correct answers on an exam, with those displaying less aggressive behaviors encountering only the less aggression-evoking situations. Individuals become more aggressive by finding themselves in, or by actively seeking, increasingly more aggression-evoking situations.
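For concreteness, a perfect Guttman pattern can be sketched as a toy triangular matrix in base R. The rows and columns here are hypothetical: rows are respondents ordered from least to most aggressive, and columns are situations ordered so that the most aggression-evoking (easiest to endorse) situation comes first:

```r
# In a perfect Guttman pattern, responding aggressively to a "harder"
# (less evocative) situation implies responding to all "easier" ones,
# which produces a lower-triangular 0/1 matrix
guttman <- 1 * lower.tri(matrix(0, 4, 4), diag = TRUE)
dimnames(guttman) <- list(paste0("person", 1:4), paste0("situation", 1:4))
guttman
```

A heatmap of real data would only approximate this staircase, but a roughly triangular banding in the NMF row heatmap would be the signature of an underlying dimension.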


R Code for the NMF Analysis of the Verbal Aggression Data Set

# access the verbal data from difR
library(difR)
data(verbal)
 
# extract the 24 items
test<-verbal[,1:24]
apply(test,2,table)
 
# remove rows with all 0s
none<-apply(test,1,sum)
table(none)
test<-test[none>0,]
 
library(NMF)
# set seed for nmf replication
set.seed(1219)
 
# 5 latent features chosen after
# examining several different solutions
fit<-nmf(test, 5, method="lee", nrun=20)
summary(fit)
basismap(fit)
coefmap(fit)
 
# scales coefficients and sorts
library(psych)
h<-coef(fit)
max_h<-apply(h,1,function(x) max(x))
h_scaled<-h/max_h
fa.sort(t(round(h_scaled,3)))
 
# hard clusters based on max value
W<-basis(fit)
W2<-max.col(W)
 
# profile clusters
table(W2)
t(aggregate(test, by=list(W2), mean))

To leave a comment for the author, please follow the link and comment on their blog: Engaging Market Research.
