- A list of alternatives that you want to test in your MaxDiff experiment.
- In Displayr, select Anything > Advanced Analysis > MaxDiff > Experimental Design.
- Specify the number of Alternatives in the study. For example, if you have 10 brands that you want to test, you would enter 10. The alternatives can be labeled, if you wish, or shown as numbers.
- Specify the number of Alternatives per question. A good starting point is 5; however, when the alternatives are wordy, it may be better to use only 4 per question, and when the alternatives are very easy to understand, you might go up to 6. The key trade-off here is cognitive difficulty for the respondent: the harder the questions, the less carefully people will consider them.
- Specify the number of Questions to ask. A rule of thumb from the good folks at Sawtooth Software gives the ideal number of questions as 3 * Alternatives / Alternatives per question. For this example, that is 3 * 10 / 5 = 6 questions. There are two conflicting factors to trade off when setting the number of questions: the more questions, the greater the respondent fatigue and the worse your data become; the fewer questions, the less data you collect and the harder it is to work out the relative appeal of the alternatives.
- Specify the number of Versions. For this example, I'll use just one version. Where the focus is only on comparing the alternatives (e.g., identifying the best from a series of product concepts), it is a good idea to create multiple versions of the design to reduce order and context effects. Sawtooth Software suggests that 10 versions are sufficient to minimize these effects, although there is no good reason not to have a separate design for each respondent. Where the goal of the study is to compare different people, such as in segmentation studies, it is often appropriate to use a single version, since multiple designs introduce a source of variation between respondents that may influence the segmentation.
- Enter the number of Repeats (the default is 1). Displayr's algorithm includes a randomization component, which can occasionally lead to poor designs being found. Increasing the number of Repeats can sometimes remedy this.
- Check Random order of alternatives to display the alternatives in a random sequence instead of in numeric order (or the order of the labels).
- Click the Calculate button to generate the design.
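To make the rule of thumb and the shape of the resulting design concrete, here is a minimal Python sketch. The function names are my own, and the greedy generator is for illustration only: it simply keeps the number of times each alternative appears balanced, whereas Displayr's actual algorithm also optimizes other properties of the design (and includes randomization, as noted above).

```python
import math
import random

def recommended_questions(n_alternatives, per_question):
    """Sawtooth's rule of thumb: 3 * Alternatives / Alternatives per question."""
    return math.ceil(3 * n_alternatives / per_question)

def naive_maxdiff_design(n_alternatives, per_question, n_questions, seed=1):
    """Greedily assemble questions, always preferring the least-shown
    alternatives so each one appears about equally often."""
    rng = random.Random(seed)
    counts = {a: 0 for a in range(1, n_alternatives + 1)}
    design = []
    for _ in range(n_questions):
        # Sort alternatives by how often they have been shown so far,
        # breaking ties at random, and take the first per_question of them.
        pool = sorted(counts, key=lambda a: (counts[a], rng.random()))
        question = sorted(pool[:per_question])
        for a in question:
            counts[a] += 1
        design.append(question)
    return design

n_questions = recommended_questions(10, 5)  # 3 * 10 / 5 = 6
for i, q in enumerate(naive_maxdiff_design(10, 5, n_questions), start=1):
    print(f"Question {i}: shows alternatives {q}")
```

With 10 alternatives, 5 per question, and 6 questions, each alternative is shown exactly three times, which is the balance the rule of thumb is aiming for.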
OPTIONAL: If you are satisfied with your design and would like to export it for use in your survey software platform, follow these steps:
- Select the Experimental Design output on the Page.
- From the object inspector, go to Inputs and tick Detailed outputs.
- Go to Publish > Export Pages > Excel.
- In the Choose which pages to export dropdown, select Export Selected Page(s) and click Export.
This will generate an Excel file containing your experimental design, which can then be uploaded to another program.
How to Use Hierarchical Bayes for MaxDiff
How to Create MaxDiff Model Ensembles
How to Create a MaxDiff Model Comparison Table
How to Save Classes from a MaxDiff Latent Class Analysis
How to Save Respondent-Level Preference Shares from a MaxDiff Latent Class Analysis
How to Convert Alchemer MaxDiff Data for Analysis in Displayr