When Discrete Choice Becomes a Rating Scale: Constant Sum Allocation

Why limit our discrete choice task to the next purchase when we can ask about the next ten purchases?  It seems inappropriate to restrict choice modeling to a single selection when the same individual makes repeat purchases from the same choice set, buying different products at different times.  Similarly, a purchasing agent or company buyer will make multiple purchases over time for different people.  Why not use choice modeling for such multiple purchases?

Everyone seems to be doing it, although under different names: constant sum, chip allocation, or simply shares.  For example, the R package ChoiceModelR allows the dependent variable to be a proportion or share.  Statistical Innovations’ Latent Gold Choice software accepts constant sum data.  Sawtooth Software prefers the term chip allocation in its CBC/HB system because the numbers assigned to the alternatives can be “normalized” before the data are analyzed.
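To make the software side concrete, here is a minimal sketch, not a worked analysis, of what a hierarchical Bayes run on allocation data might look like in ChoiceModelR.  The data frame coffee_data is hypothetical, standing in for the package's documented long format (one row per alternative, with respondent, choice set, alternative, attribute, and response columns); the exact coding expected for share responses may differ across versions, so consult the package manual before running anything like this.

```r
# A hedged sketch: coffee_data is a hypothetical data frame in
# ChoiceModelR's long format -- columns for respondent id, choice set,
# alternative, the attributes (size, price), and the response y.
library(ChoiceModelR)

# xcoding: one entry per attribute; 0 = categorical, 1 = continuous
xcoding <- c(0, 0)

fit <- choicemodelr(data = coffee_data,
                    xcoding = xcoding,
                    mcmc = list(R = 20000,    # total MCMC draws
                                use = 10000), # draws used for estimates
                    options = list(none = TRUE, # a "get it somewhere else"
                                                # (none) alternative is shown
                                   save = TRUE, # save individual-level draws
                                   keep = 5))   # thinning interval
```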

A specific example might be helpful.  Suppose we were conducting a discrete choice study varying the size and price of six different coffee menu items.  We might use the following directions.
“Please assume that every weekday you buy your coffee from the same small vendor offering only six possible selections.  I will give you a menu listing six different items plus the option of getting your coffee somewhere else.  I would like you to tell me how many of each alternative you would select over the next two weeks.  It is as if you had 10 chips to allocate across the seven alternatives.  If you would buy the same coffee every day, you would place all 10 chips on that one alternative.  If every day you would get your coffee somewhere else, you would place all 10 on the ‘Get Somewhere Else’ alternative.  You are free to allocate the 10 chips across the seven alternatives in any way you wish, as long as the allocation shows what you would buy or not buy over the next 10 days.”
On the surface, it makes sense to treat the choice exercise as yielding not one choice but ten separate choices.  It is as if the respondent made ten independent purchases, one each day over a two-week period.  That is, we could pretend that the respondent actually saw 10 different choice sets, all with the same attribute levels, and made 10 separate choices.  You do not need to analyze the data in this manner, but it is probably the most straightforward way of thinking about the task and the resulting choice data.  Thus, the data remain essentially the same whether you analyze the numbers as replicate weights (Latent Gold Choice) or introduce a “total task weight” (Sawtooth CBC/HB).
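A small base R sketch makes this expansion concrete: one respondent's chip allocation is unpacked into ten pseudo-choice records, as if the same choice set had been answered on ten separate days.  The alternative labels and the particular allocation are invented for illustration.

```r
# One respondent's constant-sum allocation across seven alternatives
# (six coffee menu items plus "Get Somewhere Else"); the chips sum to 10.
chips <- c(item1 = 4, item2 = 0, item3 = 3, item4 = 0,
           item5 = 0, item6 = 1, elsewhere = 2)

# Expand into 10 replicate "choices" -- the same choice set repeated,
# with each alternative chosen as many times as it received chips.
replicates <- data.frame(
  task   = seq_len(sum(chips)),
  choice = rep(names(chips), times = chips)
)
replicates
```

Weighting each alternative's choice record by its chip count, as Latent Gold's replicate weights or Sawtooth's total task weight do, contributes to the likelihood exactly as this expanded data set does.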

If you have read my last post on incorporating preference construction into the choice modeling process, you may have already guessed that people are probably not very good at predicting their future behavior.  Diversification bias is one of the problems respondents encounter: when individuals are asked to decide what they intend to consume over several future time periods, their selections are more varied than what they actually select when observed over the same periods.  Thus, going to the grocery store once a week and buying an entire week of dinners will produce more variety than deciding what you are in the mood for each evening and making separate trips to the store.  Fortunately, we know a great deal about how we simulate future events and predict our preferences for the outcomes of those simulations.  As retrospection is remembering the past, prospection is experiencing the future.  Unsurprisingly, systematic errors limit what we can learn about actual future behavior from today’s intentions.

This is another example of choice architecture, which was discussed in the previous post.  Choice is a task, and small changes in the task may have a major impact on the results.  We could stop here and conclude that asking about the next 10 purchases makes sense only in those situations where future choices are all made at one point in time (not a very common occurrence).  Clearly, it makes little sense to ask respondents to participate in a choice study whose findings cannot be generalized to the marketplace of ultimate interest.  However, we do not wish to overlook another important difference between assigning 10 points among the alternatives and asking respondents to perform 10 different choice tasks.  Diversification bias occurs when we ask each respondent to complete a Monday choice task, then a Tuesday choice task, and so on.  That was not our choice task in the constant sum allocation.

When respondents are debriefed, they do not report that they spent time thinking about each of the 10 days separately.  They do not imagine filling in their daily menu planner.  Instead, they talk about their relative preferences for the alternatives in the choice set.  If only one alternative is acceptable, it gets all 10 points.  If two alternatives are equally desired, each receives a score of five.  The researcher may believe that this is a choice study, but respondents simplify the task by treating it as a typical constant sum and transforming it into relative preference ratings.

One might argue that all this improves our research because we are gathering more information about the relative preference standing of the alternatives.  However, if our goal is making money by selling coffee, it does not help to add a menu item that is never purchased because it is always a close second-place finisher.  Moreover, the constant sum leads the respondent astray, prompting distinctions and attention to attributes that would not have arisen spontaneously when actual purchases were made.

There is an element of problem solving in choice modeling.  Respondents are presented with the choice task: they are given the instructions and the choice sets and are told how to provide their response.  I have deliberately avoided showing the choice sets in order not to introduce an additional level of complexity into this post (e.g., the dynamic effects of repeatedly presenting what might be complex choice descriptions).  But even with this somewhat abridged description, we can recognize that the choice task defines the rules of the game.

Preferences are constructed.  The choice task elicits memories of past experiences in similar situations.  This alone may be sufficient to generate a response, or additional problem solving may be needed, sometimes a good amount of simplification and sometimes extensive restructuring of the information provided.  It depends on the choice task and the respondent.  As market researchers, we must make the effort to ensure that our experimental game matches the game played by consumers in the marketplace.
