This article describes how to run a choice model using discrete choice experiment data in Displayr. To follow these steps, you will need:
- A document containing a choice modeling data set.
- The choice model design in one of the formats listed below.
1. From the menus, select Anything > Advanced Analysis > Choice Modeling and select one of the following models:
- Hierarchical Bayes - this model is more flexible in modeling the characteristics of each respondent, so it tends to produce a model that better fits the data.
- Latent class analysis - choose this when you want a segmentation of respondents.
- Multinomial logit - equivalent to a single-class latent class analysis.
A new R output called choice.model will appear in the Report tree on the left, with the following controls in the object inspector on the right.
2. From the object inspector, select one of the following options for Design source:
- Data set - select variables from a data set to specify the design. Variables need to be supplied corresponding to the version, task and attribute columns of a design. See here for an example.
- Experimental design R output - select an R output in the project to supply the choice model design (created using Automate > Browse Online Library > Conjoint/Choice Modeling > Experimental Design).
- Sawtooth CHO format - supply the design using a Sawtooth CHO file. You'll need to upload the CHO file to the project as a data set (first rename it to end in .txt instead of .cho so that it can be recognized by Displayr). The new data set will contain a text variable, which should be supplied to the CHO file text variable input.
- Sawtooth dual file format - supply the design through a Sawtooth design file (from the Sawtooth dual file format). You'll need to upload this file to the project as a data set. The version, task and attributes from the design should be supplied to the corresponding inputs (similar to the Data set option).
- JMP format - supply the design through a JMP design file. You'll need to upload this file to the project as a data set. The version, task and attributes from the design should be supplied to the corresponding inputs (similar to the Data set option).
- Experiment variable set - supply the design through an Experiment variable set in the project.
3. When Data set, Sawtooth dual file format, or JMP format is selected, choose the variables from your design data set containing the Version, Task and Attributes variables.
Note: if you are working with an Alchemer (formerly SurveyGizmo) data set, the ResponseID from the conjoint data set is used as Version and Set Number as Task.
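For illustration, a design supplied as a data set might be laid out as follows, with hypothetical attribute columns (Brand, Price, Size) holding level indices alongside the Version and Task columns:

```
Version  Task  Brand  Price  Size
1        1     2      1      3
1        2     1      3      2
1        3     3      2      1
2        1     1      2      3
```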
4. For most of these options, you'll also need to provide attribute levels through a spreadsheet-style data editor. To enter the attributes, select Enter attribute levels and enter the attribute name and levels in each column.
Note that this is optional for the JMP format if the design file already contains attribute level names. The levels are supplied in columns, with the attribute name in the first row and attribute levels in subsequent rows.
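As a hypothetical example, the attribute levels for a design with three attributes might be entered like this, one attribute per column, with the attribute name in the first row:

```
Brand        Price   Size
Coca-Cola    $1.99   Small
Pepsi        $2.49   Medium
Store brand  $2.99   Large
```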
5. Next, you'll need to select the Respondent Data. Whether respondent data needs to be explicitly provided depends on how you supplied the design in the previous step. If an Experiment variable set or CHO file was provided, there is no need to separately provide the data, as these already contain the choices made by the respondents.
For the other methods of supplying the design, the respondent Choices and the Tasks or Version corresponding to these choices need to be provided from variables in the project. Each variable corresponds to a question in the choice experiment and the variables need to be provided in the same order as the questions.
Note the following:
- If you have a 'None of these' option you will need to code this response as 0 or set Missing Values to Exclude from analyses for the relevant variables in your data set.
- If you have a dual-response 'none' design, you will additionally need to select the corresponding 'Yes/No' questions in the Dual-response 'none' choice field.
- If your conjoint data comes from Alchemer, you will need to first add both the conjoint and respondent data files as data sets and go to Anything > Advanced Analysis > Convert Alchemer (Survey Gizmo) Conjoint Data for Analysis. Displayr will then add the appropriate questions containing the choices and the design version in the respondent data set.
- Instead of using respondent data, there is also an option to use simulated data, by changing the Data source setting to Simulated choices from priors. Please see this blog post for more information on using simulated data.
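The 'None of these' recoding mentioned above can be sketched as follows (a minimal illustration with made-up data, in which the code 5 stands in for the 'None of these' response):

```python
# Hypothetical recode: in this made-up data, respondents who picked
# 'None of these' were coded as 5 (one more than the 4 alternatives).
# The model expects that response to be coded as 0 instead.
raw_choices = [1, 3, 5, 2, 5]
NONE_CODE = 5
recoded = [0 if choice == NONE_CODE else choice for choice in raw_choices]
print(recoded)  # [1, 3, 0, 2, 0]
```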
6. Select one of the following options from the Missing data input, which determines how Displayr will deal with any missing data:
- Use partial data is the default setting; it removes questions with missing data but keeps the respondent's other questions for analysis.
- Exclude cases with missing data removes any respondents with missing data.
- Error if missing data shows an error message if any missing data is present.
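The three settings can be pictured roughly as follows (an illustrative sketch, not Displayr's implementation; each respondent is a list of answers and None marks a missing one):

```python
# Illustrative sketch of the three Missing data settings
# (not Displayr's actual implementation).
def handle_missing(respondents, policy):
    """respondents: list of per-respondent answer lists; None = missing."""
    if policy == "Error if missing data":
        if any(None in answers for answers in respondents):
            raise ValueError("Missing data is present.")
        return respondents
    if policy == "Exclude cases with missing data":
        # Drop any respondent with at least one missing answer.
        return [answers for answers in respondents if None not in answers]
    if policy == "Use partial data":
        # Drop only the missing questions; keep the rest of each respondent.
        return [[a for a in answers if a is not None] for answers in respondents]
    raise ValueError(f"Unknown policy: {policy}")

data = [[1, 2, None], [3, 1, 2]]
print(handle_missing(data, "Use partial data"))             # [[1, 2], [3, 1, 2]]
print(handle_missing(data, "Exclude cases with missing data"))  # [[3, 1, 2]]
```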
7. In the Model section, if Latent Class Analysis or Hierarchical Bayes is selected as the model Type, enter the Number of classes you want the model to create.
8. OPTIONAL: Enter a value for Questions left out for cross-validation. If there are too many classes, computation time will be long and the model may overfit the data. To determine the amount of overfitting, set Questions left out for cross-validation to be greater than the default of 0. This will allow you to compare in-sample and out-of-sample prediction accuracies in the output.
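The idea behind leaving questions out can be sketched as follows (illustrative only, assuming the held-out questions are simply the last few per respondent; Displayr's actual selection may differ):

```python
# Sketch: leave out the last k questions of each respondent so that
# out-of-sample prediction accuracy can be checked (illustrative only).
def split_questions(questions, k_leave_out):
    """Return (estimation set, hold-out set) for one respondent."""
    if k_leave_out == 0:
        return questions, []
    return questions[:-k_leave_out], questions[-k_leave_out:]

tasks = ["q1", "q2", "q3", "q4", "q5", "q6"]
estimation, holdout = split_questions(tasks, 2)
print(estimation)  # ['q1', 'q2', 'q3', 'q4']
print(holdout)     # ['q5', 'q6']
```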
9. All other options are more advanced and can be left at their default values; they will be covered in a different article.
10. OPTIONAL: Apply a filter to the model by selecting a filter variable from the Filter(s) input at the top of the object inspector.
11. OPTIONAL: Apply a weight to the model by selecting a weight variable from the Weight input at the top of the object inspector.
12. Press the Calculate button to run the model.