This article describes how to design experiments for choice-based conjoint analysis (also known as choice modeling).
Requirements
- A list of brand attributes and levels for each attribute.
Please note these steps require a Displayr license.
Method
1. Select Anything > Advanced Analysis > Choice Modeling > Experimental Design.
2. From the inputs on the right, select an Algorithm for creating the design. The default is Balanced overlap, which generates a design with a high level of balance for each respondent. Algorithm options include:
- Random - Randomly chooses levels, only ensuring alternatives are not identical within a question.
- Shortcut - The design is built with each alternative consisting of the least frequently used level of each attribute. If levels are equally frequent overall, the level least used within the current question is selected; otherwise a random choice is made.
- Complete enumeration - For each alternative, every possible alternative is evaluated and the one with the lowest cost is selected. The cost of an alternative depends on its incremental impact upon the design in terms of a combination of single-level balance, pairwise level balance, and level overlap within questions.
- Balanced overlap - As per Complete enumeration except level overlaps are less strongly penalized.
- Efficient - The design is chosen by an algorithm that optimizes the D-error, so that the variance of the model parameter estimates is minimized.
- Partial profiles - Uses the same algorithm as Efficient except that designs generated with this algorithm can have a specified number of attributes set constant.
- Alternative specific - Random - Creates a design with attributes that are specific to each alternative. Random levels are chosen for each attribute.
- Alternative specific - Federov - Creates a design with attributes that are specific to each alternative. The design is optimized to maximize the information about the model parameters, given the responses.
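For intuition, the Random algorithm's rule (randomly chosen levels, no duplicate alternatives within a question) can be sketched in a few lines of Python. This is a simplified illustration, not Displayr's implementation; all names are invented for the example:

```python
import random

def random_design(levels_per_attribute, n_questions, n_alternatives, seed=0):
    """Generate one version of a random choice design.

    Each alternative is a tuple of 1-based level indices; alternatives are
    re-drawn until no two within the same question are identical.
    """
    rng = random.Random(seed)
    design = []
    for _ in range(n_questions):
        question = []
        while len(question) < n_alternatives:
            alt = tuple(rng.randint(1, n) for n in levels_per_attribute)
            if alt not in question:  # enforce distinct alternatives per question
                question.append(alt)
        design.append(question)
    return design

# Four attributes with 2, 2, 3, and 3 levels; 6 questions of 3 alternatives.
design = random_design([2, 2, 3, 3], n_questions=6, n_alternatives=3)
```

The more sophisticated algorithms differ only in how the levels are chosen: Shortcut tracks level frequencies, while Complete enumeration and Balanced overlap score each candidate alternative by a cost function.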
3. Enter the number of Questions per respondent. This is the number of questions to show to each respondent.
4. Enter the number of different design Versions that you want to generate. The default is 1.
5. Only check the Alternatives are labeled by first attribute checkbox if the first attribute in your list is an alternative label. If checked, a labeled design is produced in which each level of the first attribute is used exactly once in each question. Hence the number of alternatives per question (excluding None) equals the number of levels of the first attribute.
6. Enter the number of Alternatives per question (excluding None(s)) that respondents will be shown.
7. Enter the number of None alternatives. This is the number of None alternatives added to each question.
8. If a non-zero value is entered at None alternatives, enter the None alternative positions: the position(s) of the None alternative(s) as a comma-delimited list. For example, '1, 3' means that there are two None alternatives per question, in the first and third positions. The number of supplied positions must match the number specified in None alternatives; if no positions are supplied, the None alternative(s) will be the last alternative(s) of each question.
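The positioning rule amounts to: parse the comma-delimited list if one is given, otherwise default to the last slots of each question. An illustrative helper (not part of Displayr):

```python
def none_positions(spec, n_nones, n_alternatives):
    """Resolve None-alternative positions for a question.

    spec           -- comma-delimited position list such as "1, 3" (may be empty)
    n_nones        -- number of None alternatives per question
    n_alternatives -- number of alternatives per question, excluding Nones
    """
    if not spec.strip():
        # Default: Nones occupy the last slots of the question.
        total = n_alternatives + n_nones
        return list(range(total - n_nones + 1, total + 1))
    positions = [int(p) for p in spec.split(",")]
    if len(positions) != n_nones:
        raise ValueError("number of positions must match None alternatives")
    return positions

assert none_positions("1, 3", 2, 3) == [1, 3]  # explicit positions
assert none_positions("", 2, 3) == [4, 5]      # default: last two of five slots
```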
9. For Attributes and levels, select one of the following options:
- Enter in spreadsheet - enter the attributes and levels through a spreadsheet-style data editor, with one attribute per column followed by that attribute's levels, by clicking the Add attributes and levels button.
- Enter attributes individually - enter each attribute name followed by a list of levels, delimited by commas. When a list is entered, another box appears for the next attribute. For Alternative specific designs, the Labels of alternatives is a comma-delimited list of alternatives. Attributes per alternative is a comma-delimited list of integers specifying the number of attributes of each alternative. The following fields should each contain a comma-delimited list of labels followed by levels for each attribute.
- Enter number of levels per attribute - a quick method for specifying a design as a comma-delimited list of the number of levels per attribute, e.g., 2,2,3,3. Attributes are labeled Attribute1, Attribute2, etc., and levels are labeled 1, 2, 3, etc.
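For example, a 2,2,3,3 spec expands into the default labels like this (illustrative Python, not Displayr code):

```python
def expand_spec(spec):
    """Expand a levels-per-attribute spec such as "2,2,3,3" into default labels."""
    counts = [int(s) for s in spec.split(",")]
    # Attributes are labeled Attribute1, Attribute2, ...; levels 1..n.
    return {f"Attribute{i}": list(range(1, n + 1))
            for i, n in enumerate(counts, start=1)}

print(expand_spec("2,2,3,3"))
# {'Attribute1': [1, 2], 'Attribute2': [1, 2], 'Attribute3': [1, 2, 3], 'Attribute4': [1, 2, 3]}
```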
10. For Efficient and Partial profiles designs that use the Attributes and levels spreadsheet option, you can additionally supply mean and/or standard deviation priors to optimize your design by placing a column called mean or sd adjacent to each attribute you wish to apply a prior to. Best practice is to do this for all attributes. Priors reflect the expected utilities of the survey respondents, where a higher utility for an attribute level indicates a stronger preference. The first utility of each attribute should be 0, and the remaining utilities should lie between -3 and 3 relative to the first, based on your industry knowledge and judgment.
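The constraints on mean priors (first utility 0, the rest within -3 to 3) can be expressed as a small check. This is an illustrative sketch with hypothetical attribute names, not Displayr's validation code:

```python
def check_priors(mean_priors):
    """Validate mean priors: first utility must be 0, all within [-3, 3]."""
    for attribute, utilities in mean_priors.items():
        if utilities[0] != 0:
            raise ValueError(f"{attribute}: first utility must be 0")
        if any(not -3 <= u <= 3 for u in utilities):
            raise ValueError(f"{attribute}: utilities must lie in [-3, 3]")

# Higher utility = stronger preference; here price levels are increasingly disliked.
check_priors({"Brand": [0, 0.5, 1.2], "Price": [0, -0.8, -1.6]})  # passes
```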
11. Tick the Enter prohibited alternatives checkbox if there are combinations of attribute levels that you want to prohibit from appearing in the same choice option. Enter a table of prohibited alternatives with one prohibition per row, with the attribute levels in the same columns and order as in the original Attributes and levels spreadsheet. If a level is All or blank, then all levels of that attribute are prohibited when in combination with the other specified attribute levels. Not available when Algorithm is Efficient or Shortcut.
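A prohibition row with All entries behaves like a wildcard match against a candidate alternative. A minimal sketch, with hypothetical attribute levels (None standing in for All):

```python
def is_prohibited(alternative, prohibitions):
    """True if the alternative matches any prohibition row.

    Each prohibition is a tuple with one entry per attribute, in the same
    order as the attributes; None means 'All' (any level matches).
    """
    return any(all(p is None or p == a for p, a in zip(row, alternative))
               for row in prohibitions)

# Prohibit Coke at $10 regardless of size (second attribute is 'All').
prohibitions = [("Coke", None, "$10")]
assert is_prohibited(("Coke", "Large", "$10"), prohibitions)
assert not is_prohibited(("Pepsi", "Large", "$10"), prohibitions)
```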
12. Constant attributes - The number of attributes to keep constant when the partial profiles algorithm is selected.
13. Extensive search - Whether to use the extensive version of the partial profiles algorithm. This is many times slower, but the resulting design is usually closer to optimal.
14. Maximum candidate questions - When the Alternative specific - Federov algorithm is chosen, the design is selected from the enumeration of all possible questions multiplied by the number of versions. For a design with many attributes, levels, and/or versions, this enumeration may be excessively large. Maximum candidate questions sets a limit on the number of questions considered when building the design: a random sample of this many candidate questions is drawn, from which the final design is optimized. When this value exceeds the size of the enumeration it has no effect; when it is lower, it increases speed at some cost in accuracy.
15. Enter the expected Sample size for the experiment. This is the anticipated number of responses, and this number of random responses is used to calculate the standard errors. The default is 300; a warning recommending that you increase the sample size is generated if the standard error for any of the attribute levels is greater than 0.05.
16. Click the Calculate button to generate the choice model experiment design.
Additional diagnostics describing balances, overlaps, and standard errors provide more information. The A-error metric shown is proportional to the average standard error of the parameters from a multinomial logit analysis with the design; a lower A-error translates to lower average standard errors of the parameters.
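As a rough illustration, the A-error (and the related D-error optimized by the Efficient algorithm) can be computed from the Fisher information matrix of a design: the inverse of the information matrix is the parameter covariance matrix, and the two metrics summarize it by the mean variance and the per-parameter determinant, respectively. The matrix below is purely illustrative:

```python
import numpy as np

def a_and_d_error(information_matrix):
    """A-error: mean parameter variance, trace(inverse) / p.
    D-error: determinant of the inverse, scaled per parameter, det ** (1/p)."""
    cov = np.linalg.inv(information_matrix)   # parameter covariance matrix
    p = cov.shape[0]                          # number of model parameters
    a_error = np.trace(cov) / p
    d_error = np.linalg.det(cov) ** (1 / p)
    return a_error, d_error

# A toy 2-parameter information matrix; real ones come from the design
# and the multinomial logit model.
info = np.array([[4.0, 1.0], [1.0, 3.0]])
a, d = a_and_d_error(info)
```

Both metrics shrink as the design becomes more informative, which is why a lower A-error corresponds to lower average standard errors.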
Diagnostics
Balances and overlaps - Creates an output containing level balances and overlap between levels of the design.
Numeric design - Creates an output containing a numeric version of the design.
Standard errors - Creates an output containing the parameter fits and their standard errors for the design.
References
- Hoare, J. (2018, July 20). How Good is your Choice Model Experimental Design? [Blog post]. Accessed from https://www.displayr.com/how-good-is-your-choice-model-experimental-design/.
- Bock, T. (2017, July 25). How to Check an Experimental Design (MaxDiff, Choice Modeling) [Blog post]. Accessed from https://www.displayr.com/check-experimental-design/.
- Yap, J. (2018, September 18). How to Use Simulated Data to Check Choice Model Experimental Designs Using Displayr [Blog post]. Accessed from https://www.displayr.com/simulated-data-designs-displayr/.
- Hoare, J. (2018, September 5). Algorithms to Create your Choice Model Experimental Design [Blog post]. Accessed from https://www.displayr.com/algorithms-to-create-your-choice-model-experimental-design/.
- Yap, J. (2018, September 12). The Partial Profiles Algorithm for Experimental Designs [Blog post]. Accessed from https://www.displayr.com/partial-profiles-algorithm/.
- Yap, J. (2018, September 12). The Efficient Algorithm for Choice Model Experimental Designs [Blog post]. Accessed from https://www.displayr.com/efficient-algorithm/.
- Hoare, J. (2018, August 16). How to Create Alternative-Specific Choice Model Designs in Displayr [Blog post]. Accessed from https://www.displayr.com/how-to-create-alternative-specific-choice-model-designs-in-displayr/.
- Cuervo, D. P., Kessels, R., Goos, P., & Sörensen, K. (2016). An integrated algorithm for the optimal design of stated choice experiments with partial profiles. Transportation Research Part B, 93.
Next
How to do the Statistical Analysis of Choice-Based Conjoint Data
How to Set Up a Choice-Based Conjoint Analysis in Qualtrics
How to Preview a Choice-Based Conjoint Questionnaire
How to Compare Multiple Choice Models
How to Create a Utilities Plot
How to Save Utilities from a Choice Model
How to Save Class Membership from a Choice Model
How to Create a Simulator for Choice Model
How to Create an Optimizer for Choice-Based Conjoint