This article describes how to read the standard outputs from choice models in Displayr. The article explores an example output (below), describing its different elements:
- Model type
- Predictive accuracy and RLH
- Mean
- Histograms and standard deviations
- Additional statistics
Please note these steps require a Displayr license.
Example output
The output below is from an analysis in Displayr of a choice-based conjoint study looking at job choice.
Model type
The title of the output indicates which model has been used. In the example above, Hierarchical Bayes has been used. The model type is changed via the Data > MODEL > Type setting; the other two model types are Latent class analysis and Multinomial logit.
Predictive accuracy and RLH
The top of this section of the output shows predictive accuracy. In the example above, the model predicts the choices of the respondents correctly 84.7% of the time, and the root likelihood (RLH) is 0.533.
Where MODEL > Questions for cross-validation is set to 1 or more, the output at the top of the page is modified to show the cross-validation accuracy first (57.5%) and then the in-sample accuracy (76.9%). Similarly, the RLH is reported for both the cross-validation sample and the in-sample data.
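As a point of reference, both statistics can be computed from the probabilities the model assigns to the choices respondents actually made. The sketch below uses made-up probabilities for five choice tasks: accuracy is the share of tasks where the model's most probable alternative matches the observed choice, and RLH is the geometric mean of the probabilities of the observed choices.

```python
import numpy as np

# Hypothetical predicted probabilities for 5 choice tasks with 4
# alternatives each (rows sum to 1); all values are made up.
probs = np.array([
    [0.10, 0.30, 0.40, 0.20],
    [0.25, 0.25, 0.25, 0.25],
    [0.05, 0.15, 0.60, 0.20],
    [0.40, 0.30, 0.20, 0.10],
    [0.10, 0.20, 0.50, 0.20],
])
chosen = np.array([2, 1, 2, 0, 3])  # alternative actually chosen per task

# Predictive accuracy: share of tasks where the most probable
# alternative matches the observed choice.
accuracy = np.mean(probs.argmax(axis=1) == chosen)

# Root likelihood (RLH): geometric mean of the probabilities assigned
# to the observed choices.
chosen_probs = probs[np.arange(len(chosen)), chosen]
rlh = np.exp(np.mean(np.log(chosen_probs)))

print(round(float(accuracy), 2), round(float(rlh), 3))
```

An RLH of 0.25 would be chance-level for four alternatives (each choice predicted with probability 1/4), which is why the 0.533 in the example indicates the model fits well.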
For more information about this, see How to Compare Choice Models (Cross-Validation) on The Data Story Guide.
Mean
The column labeled Mean shows the average coefficient estimated for each attribute level. There are a number of alternative names for the mean, including:
- Utility.
- The mean or average utility.
- Coefficient.
- Partworth utility.
- The mean coefficient for the attribute level.
- The mean of the individual-level coefficients for the attribute level.
In general, there are a variety of different types of attributes and associated "means" that can appear in this table:
- Alternative specific constants
- Coefficients for categorical attributes
- Coefficients for numeric attributes
- Coefficients for interactions
Alternative-specific constants
The first attribute shown in the output is, by default, labeled Alternative. It is created by the model to estimate bias toward choosing an alternative based on the position in which it is shown. It will not appear if MODEL > Alternative-specific constants is unchecked. Alternative 1 indicates the first alternative shown in the choice questions, and its mean is fixed at 0. (As noted above, the term "mean" is not standard.)
As discussed in Introduction to the Multinomial Logit Model, the analysis sets the first alternative's mean at 0 and estimates the other coefficients relative to it. Looking at alternative 2 from the example output, we can see that its mean is 0.4. This tells us that alternative 2 was, all else being equal, more likely to be chosen than alternative 1. We can quantify this degree of preference using the logit transformation: the probability that alternative 2 is chosen relative to alternative 1 is exp(0.4) / (exp(0.0) + exp(0.4)) = 60%.
Alternative 3 is marginally the most preferred with a utility of 0.5, and alternative 4 has a utility of 0.2. The probability that somebody will choose alternative 3 relative to all four alternatives is then exp(0.5) / (exp(0.0) + exp(0.4) + exp(0.5) + exp(0.2)) = 31%. Similarly, the probability of choosing alternative 1, the least preferred alternative, is exp(0.0) / (exp(0.0) + exp(0.4) + exp(0.5) + exp(0.2)) = 19%.
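These share calculations can be reproduced directly from the means in the example output. The sketch below applies the logit (softmax) transformation to all four alternatives at once:

```python
import math

# Means (utilities) of the four alternatives from the example output.
utilities = {"Alternative 1": 0.0, "Alternative 2": 0.4,
             "Alternative 3": 0.5, "Alternative 4": 0.2}

# Logit (softmax) transformation: exp(utility) / sum of exp(utilities).
denom = sum(math.exp(u) for u in utilities.values())
shares = {name: math.exp(u) / denom for name, u in utilities.items()}

for name, share in shares.items():
    print(f"{name}: {share:.0%}")
```

Because the shares come from a common denominator, they sum to 100%, and alternative 3's 31% and alternative 1's 19% match the calculations above.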
An alternative-specific constant measures the appeal of a specific alternative presented in the choice questions. With labeled choice questions, the alternative-specific constants often represent a key attribute such as brand. With unlabeled choice questions, where each column is a bundle of attribute levels, the alternative-specific constants capture response biases.
In an ideal world, all alternative-specific constants, except for 'none of these' options, would have a mean of 0. However, in practice this is rarely the case, and it is useful to include the alternative-specific constants because they may remove response biases from the rest of the analysis (without them, this bias may instead cause other attribute means to be estimated incorrectly).
Presumably, the mean is lower for alternatives 1 and 4 because many people were viewing the alternatives on mobile devices and it was just easier to see the middle alternative. I write "presumably" because I have no way to know for certain.
Coefficients for categorical attributes
The output below shows the distribution of utilities for an attribute measuring the appeal of different salary increases. The first column shows the means. As with Alternative, the first level of the attribute is set to 0. As we would expect, larger salary increases have higher mean coefficients (i.e., utilities). However, the relationship is nonlinear: the difference between current and 5% higher is 0.3, whereas between 5% and 10% higher the difference is 0.6.
The means alone are often best viewed as a column chart. The visualization below shows the means from above (except for Alternative) as a column chart. To create this visualization in Displayr, use Visualization > Choice Modeling and MaxDiff Diagnostic Plots > Utilities Plot (also found under DIAGNOSTICS > Utilities Plot on the output) and change Chart Type to Column.
We can compare utilities across attributes. For example, from the output above, the utility of an employer currently being carbon neutral, relative to having no plans to become carbon neutral, is a bit less than the utility of a 10% pay increase.
Coefficients for numeric attributes
Looking at the utilities (means) for Salary, the pattern seems to be nonlinear. The benefit of increasing salary by 5% is much smaller than the benefit of increasing by 10% versus 5%. Such a result may be an insight or an error due to sampling noise. If it is believed to be an error rather than an insight, it is appropriate to instead fit a model that assumes a constant utility difference for an improvement of 5%, whether from 0% to 5%, from 5% to 10%, or any other 5% difference. This is achieved by estimating a single coefficient for the attribute. See Numeric versus Categorical Attributes in Choice Models in The Data Story Guide for more information about this.
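To make the distinction concrete, the sketch below contrasts the categorical means read from the example output (0.0, 0.3, 0.9) with a numeric specification that uses a single coefficient, so every 5% improvement adds the same utility. The coefficient value is hypothetical.

```python
# Categorical means for Salary read from the example output: the jump
# from "5% higher" to "10% higher" (0.6) is twice the jump from
# "current" to "5% higher" (0.3).
categorical = {0: 0.0, 5: 0.3, 10: 0.9}

# Numeric specification: one coefficient, so utility grows linearly.
# The coefficient value below is made up for illustration.
salary_coef = 0.06  # utility per percentage point of salary increase

for pct in (0, 5, 10):
    linear = salary_coef * pct
    print(f"{pct:>2}% increase: categorical {categorical[pct]:.1f}, "
          f"numeric {linear:.2f}")
```

Under the numeric specification, the 0-to-5% and 5-to-10% steps each add the same utility, which is exactly the constraint that the categorical version does not impose.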
Coefficients for interactions
The fourth type of coefficient is for interactions, which are created in choice models for:
- Interactions between attributes.
- Interactions between attributes and alternatives.
- Interactions between attributes and other variables (e.g., demographics).
This topic is beyond the scope of this article.
Histograms and standard deviations
Some choice models estimate a single parameter for each attribute level (e.g., the multinomial logit model). More advanced models, such as Hierarchical Bayes, instead estimate a distribution. When a distribution is used, it appears as a histogram (the red and blue bars in the output) and is summarized by the Mean and Standard Deviation columns on the right. As shown below, there is no variation for the first level of any attribute, as its mean is set to 0 and its variation is also set to 0 (not all software makes this same assumption).
Looking at Employer will be carbon neutral in 30 years, we can see that the mean is small (0.1) and there is very little variation. In contrast, the Work Location levels all have relatively high variation despite their means being close to 0. This tells us that there is substantial variation across the sample in the appeal of these attribute levels.
With Hierarchical Bayes models, negative coefficients in the histogram are shown as red columns and positive coefficients as blue columns. With latent class models (MODEL > Type of Latent class analysis), color coding instead indicates which columns come from which class.
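The relationship between the histogram and the two summary columns can be illustrated with simulated data. The sketch below draws hypothetical individual-level coefficients for one attribute level and computes the quantities the table would report; all numbers are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical individual-level coefficients for one attribute level,
# one per respondent, as a Hierarchical Bayes model might estimate.
coefs = rng.normal(loc=0.1, scale=1.0, size=500)

mean = coefs.mean()        # the Mean column
sd = coefs.std(ddof=1)     # the Standard Deviation column

# The histogram splits at zero: negative coefficients are drawn in
# red, positive in blue.
share_negative = np.mean(coefs < 0)
print(round(float(mean), 2), round(float(sd), 2),
      round(float(share_negative), 2))
```

A mean near 0 with a large standard deviation, as with the Work Location levels above, corresponds to a histogram with substantial red and blue mass on both sides of zero.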
Additional statistics
More technical parameters appear at the bottom of the table. They all have standard, well-defined meanings, but none are particularly informative for everyday analysis of choice models, so they are not discussed further.