Judging from those of you who have contacted me, there is still some confusion about the claims made by Sawtooth that MaxDiff estimates can be converted to ratio-scale probabilities. Many of you seem to believe that if attribute A has a MaxDiff score of 20 and attribute B has a MaxDiff score of 10, then A must be twice as important as B. A quick example should convince you that this is not the case.
We can all agree that weight is measured on a ratio scale. A 20 pound object is twice as heavy as a 10 pound object. Below you will find three sets of 10 objects each. The 10 objects in the three sets have very different weights, but they have the same rank order. Now, if MaxDiff yielded a ratio scale that measured weight (importance) and summed to 100, we would expect to see the MaxDiff scores in the columns adjacent to the weights. That is, I simply rescaled the weights of the objects so that they summed to 100, multiplying by 100/55, 100/228, and 100/145 for the three sets, respectively.
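To make the rescaling step concrete, here is a minimal sketch. The original table of weights is not reproduced above, so the three weight sets below are hypothetical stand-ins, chosen only so that their sums match the divisors mentioned (55, 228, and 145) and so that all three share one rank order:

```python
# Hypothetical weight sets (the article's actual table is not shown here).
# All three share the same rank order but have very different magnitudes.
set_1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]        # sums to 55
set_2 = [1, 2, 4, 8, 16, 24, 32, 40, 48, 53]   # sums to 228
set_3 = [2, 3, 5, 8, 12, 17, 21, 24, 26, 27]   # sums to 145

def rescale_to_100(weights):
    """Rescale weights to sum to 100; all ratios between weights are preserved."""
    total = sum(weights)
    return [100 * w / total for w in weights]
```

Note that rescaling by a constant is exactly the transformation a ratio scale permits: the rescaled scores still carry the original ratio information, which is what true ratio-scaled MaxDiff scores would have to reproduce.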
However, we have a problem. The three sets would always produce the same maximum difference data. Construct any choice set of four objects (e.g., B, E, G, and J). B would be worst, and J would be best in all three sets. Moreover, this would be true for any choice set with any block size. MaxDiff uses only rank-order information to determine best and worst. Therefore, regardless of the size of the differences among the weights (importance), we will always get the same maximum difference selections and the same MaxDiff scores.
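This invariance can be verified by exhaustive enumeration. Using the same hypothetical weight sets as above (labeled A through J in increasing order of weight), a deterministic respondent who picks the heaviest object as best and the lightest as worst makes identical choices in every possible four-object choice set, no matter which weight set is in play:

```python
from itertools import combinations

# Objects A..J; weights are hypothetical but share one rank order across sets.
labels = "ABCDEFGHIJ"
set_1 = dict(zip(labels, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
set_2 = dict(zip(labels, [1, 2, 4, 8, 16, 24, 32, 40, 48, 53]))
set_3 = dict(zip(labels, [2, 3, 5, 8, 12, 17, 21, 24, 26, 27]))

def best_worst(choice_set, weights):
    """A respondent who answers by weight: heaviest is best, lightest is worst."""
    return max(choice_set, key=weights.get), min(choice_set, key=weights.get)

# Every possible 4-object choice set yields the same selections in all three sets.
for cs in combinations(labels, 4):
    assert best_worst(cs, set_1) == best_worst(cs, set_2) == best_worst(cs, set_3)
```

The same loop passes for any block size (swap 4 for 3, 5, and so on), because `max` and `min` depend only on the ordering of the weights, never on their magnitudes.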
The MaxDiff claim of ratio scaling refers to the number of times an object is selected from the choice sets. But selection is determined solely by rank order. As we have just seen, the magnitudes of the differences play no role whatsoever. When my clients take the time to work through this example, their conclusion is always the same: “Why am I spending all this time and money if I am only getting a rank ordering based on stated importance?” Why indeed?
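Since the selections are identical, any score built by counting them is identical too. A sketch, again with the hypothetical weight sets from above: tallying best-minus-worst counts over every four-object choice set produces exactly the same counts (and hence the same MaxDiff scores) for all three sets, even though their underlying weight ratios differ wildly:

```python
from itertools import combinations
from collections import Counter

labels = "ABCDEFGHIJ"
weight_sets = [
    dict(zip(labels, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10])),   # hypothetical set 1
    dict(zip(labels, [1, 2, 4, 8, 16, 24, 32, 40, 48, 53])),  # hypothetical set 2
    dict(zip(labels, [2, 3, 5, 8, 12, 17, 21, 24, 26, 27])),  # hypothetical set 3
]

def count_scores(weights):
    """Best-minus-worst counts over every 4-object choice set."""
    score = Counter()
    for cs in combinations(labels, 4):
        score[max(cs, key=weights.get)] += 1  # chosen as best
        score[min(cs, key=weights.get)] -= 1  # chosen as worst
    return dict(score)

scores = [count_scores(w) for w in weight_sets]
# Identical counts in, identical MaxDiff scores out.
assert scores[0] == scores[1] == scores[2]
```

The counts are a function of rank order alone, so any claim that ratios of the resulting scores recover ratios of the underlying importances cannot hold in general.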