Requirements
- A list of alternatives that you want to test in your MaxDiff experiment.
Method
- In Displayr, from the Report tree select + > Advanced Analysis > MaxDiff > Experimental Design.
- Specify the number of Alternatives in the study. For example, let's say you have 10 brands to test; you would enter the number of alternatives as 10. The alternatives can be labeled, if you wish, or shown as numbers.
- Specify the number of Alternatives per question. A good starting point is 5; however, when the alternatives are wordy, it may be better to use only 4 alternatives per question. Where the alternatives are really easy to understand, you might choose to go to 6. The key trade-off here is cognitive difficulty for the respondent: the harder the questions, the less carefully people are likely to consider them.
- Specify the number of Questions to ask. A rule of thumb provided by the good folks at Sawtooth Software gives the ideal number of questions as 3 * Alternatives / Alternatives per question. For my example, that is 3 * 10 / 5 = 6 questions. There are two conflicting factors to trade off when setting the number of questions. The more questions, the more respondent fatigue, and the worse your data becomes. The fewer questions, the less data, and the harder it is to work out the relative appeal of the alternatives.
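The rule of thumb above can be expressed as a one-line helper (a sketch for convenience; `recommended_questions` is an illustrative name, not part of Displayr):

```python
import math

def recommended_questions(alternatives, per_question):
    """Sawtooth rule of thumb: enough questions for each alternative
    to be shown roughly 3 times per respondent."""
    return math.ceil(3 * alternatives / per_question)

print(recommended_questions(10, 5))  # 10 brands, 5 per question -> 6
print(recommended_questions(10, 4))  # wordier alternatives, 4 per question -> 8
```

Rounding up with `math.ceil` errs on the side of slightly more data when the formula does not divide evenly.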
- Specify the number of Versions. For this example, I'll use just one version. When the focus is only on comparing alternatives (e.g., identifying the best from a series of product concepts), it is a good idea to create multiple versions of the design to reduce the effects of order and context. Sawtooth Software suggests that if you use multiple versions, 10 is sufficient to minimize order and context effects, although there is no good reason not to have a separate design for each respondent. When the goal of the study is to compare different people, such as in segmentation studies, it is often appropriate to use a single version (since multiple designs introduce variation between respondents and may influence the segmentation).
- Enter the number of Repeats (default is 1). Displayr's algorithm includes a randomization component. Occasionally, this can result in poor designs being identified. Sometimes this problem can be remedied by increasing the number of Repeats.
- Check Random order of alternatives to display alternatives in a random sequence instead of in the numeric order (or the order of the labels).
- Click the Calculate button to generate the design.
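To make the output concrete, here is a minimal sketch of what a single-version design looks like: a table with one row per question, listing which alternatives are shown. This greedy approach (always show the least-shown alternatives, breaking ties at random) is purely illustrative; it balances how often each alternative appears but, unlike Displayr's algorithm, makes no attempt to balance how often pairs of alternatives appear together.

```python
import random

def maxdiff_design(n_alternatives, per_question, n_questions, seed=0):
    """Illustrative greedy design: each question shows the least-shown
    alternatives so far, with random tie-breaking. Not Displayr's algorithm."""
    rng = random.Random(seed)
    counts = [0] * n_alternatives
    design = []
    for _ in range(n_questions):
        # Order alternatives by how often they have been shown, ties random.
        order = sorted(range(n_alternatives), key=lambda a: (counts[a], rng.random()))
        question = sorted(order[:per_question])
        for a in question:
            counts[a] += 1
        design.append([a + 1 for a in question])  # 1-based alternative numbers
    return design

# 10 alternatives, 5 per question, 6 questions: each alternative shown 3 times.
for row in maxdiff_design(10, 5, 6):
    print(row)
```

With 10 alternatives, 5 per question, and 6 questions, the greedy rule shows every alternative exactly 3 times, matching the rule of thumb above.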
OPTIONAL: If you are satisfied with your design and would like to export it to use in your survey software platform, you can follow these steps:
1. Select the Experimental Design output on the Page.
2. From the object inspector, go to Data and tick Detailed outputs.
3. Go to Share > Export Report > Excel.
4. In the Choose which pages to export dropdown, select Export Selected Page(s) and click Export.
This will generate an Excel file containing your experimental design, which can then be uploaded to another program.
Next
How to Use Hierarchical Bayes for MaxDiff
How to Create MaxDiff Model Ensembles
How to Create a MaxDiff Model Comparison Table
How to Save Classes from a MaxDiff Latent Class Analysis
How to Save Respondent-Level Preference Shares from a MaxDiff Latent Class Analysis
How to Convert Alchemer MaxDiff Data for Analysis in Displayr