Sometimes a result in a table is different from what you were expecting. A process for tracking down the cause of the result is:
- Check the number of cases in the data set
- Review the sample size of the table or visualization
- Review the sample size for each cell in the table
- Check filters and weights
- Check the structure of the variable sets used to construct the table
- Review the raw data
- Review value attributes
- Review the inputs and other settings in the object inspector
- Follow links back to their source
- Review Rules
- Review the statistical testing assumptions
- Search Help
- Contact us
Note that some of the methods detailed below require a Displayr license, while others can be done using either the Data Stories module or a Displayr license.
Check the number of cases in the data set
A common cause of results that look wrong is that the data file contains too few or too many cases. This is checked by clicking on the data set in the Data Sources tree, and reviewing the Number of cases shown in the object inspector.
Review the sample size of the table or visualization
Data issues are often discovered by looking at the sample size of a table or visualization. (If you can't see the sample size, change the visualization back into a table first.) For example, the visualization below has a sample size of 895 (highlighted in yellow at the bottom).
Sometimes tables or visualizations will show a range of sample sizes, indicating that the sample size is different for different results. For example, the table below shows sample sizes varying from 758 to 873. This is caused by different cells in the table having different sample sizes. This is discussed in more detail in the next section.
Review the sample size for each cell in the table or visualization
If you have a visualization, rather than a table, you first need to change it back to a table by pressing the Table button in the object inspector.
The table above shows row percentages. We can see the sample size for each row by selecting the table and then selecting Data > Statistics > Right and Row sample size in the object inspector. The resulting table is shown below. The last column shows that the smallest sample size, 758, is for Cancel your subscription.
Note that the footer says 69 are missing. This is calculated as the total sample size (600) less the number of cases with data for any of the rows (531).
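The relationship between per-row sample sizes and the footer's "missing" count can be sketched in a few lines of generic Python (this is only an illustration of the arithmetic, not Displayr's internals, and the data is made up):

```python
# Made-up data: each row of a table, with None marking missing data.
responses = {
    "Cancel your subscription": [1, None, 0, 1, None, 1],
    "Renew early":              [0, 1, None, 1, None, None],
}

total_cases = 6  # total sample size of this (hypothetical) data set

# Sample size for a row = cases with non-missing data for that row.
row_n = {row: sum(v is not None for v in vals)
         for row, vals in responses.items()}

# A case counts as "missing" in the footer only if it has no data
# in any row of the table.
missing = sum(
    all(responses[row][i] is None for row in responses)
    for i in range(total_cases)
)

print(row_n)    # per-row sample sizes
print(missing)  # count shown as "missing" in the footer
```

Here case 5 (the fifth respondent) has no data in either row, so it is the only case counted as missing, even though each row also has its own additional missing values.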
In grid tables, we would instead use Statistics > Cells > Sample size, as the result can differ cell-to-cell.
Special note should be made of the sample size for any NETs or SUMs. These are always calculated based on the people who have no missing data for any of the other cells in the row or column of the table. For example, the footer in the table below tells us that the sample size varies from 828 to 869; however, the smaller sample size of 828 applies only to the NET, and each of the other categories has, on its own, a substantially larger sample size.
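A minimal sketch of why a NET's sample size can be smaller than that of any individual category (generic Python with made-up data, purely to illustrate the complete-case logic described above):

```python
# Hypothetical data: two categories, with None marking missing data.
data = {
    "Category A": [1, 0, None, 1, 1],
    "Category B": [0, None, 1, 1, 0],
}

n_cases = 5

# Each category's sample size counts its own non-missing values.
per_category_n = {c: sum(v is not None for v in vals)
                  for c, vals in data.items()}

# The NET only counts cases with no missing data in ANY category.
net_n = sum(
    all(data[c][i] is not None for c in data)
    for i in range(n_cases)
)

print(per_category_n)  # each category has 4 cases on its own
print(net_n)           # the NET has only 3 complete cases
```

Because different cases are missing in different categories, the set of cases complete across all categories is smaller than any single category's sample.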
Check filters and weights
You can see whether a weight or filter has been applied in the footer (if there is a footer), or by selecting the output and viewing the filters and weights in the object inspector. In this example, no weight has been applied, but the table has been filtered to males. If a visualization has been created from a separate table as an input, you need to review the weights and filters of that table rather than of the visualization.
Please see How to Investigate Filters that Show Incorrect or Odd Results and How to Troubleshoot Weights for more instructions on how to figure out what is wrong with your filter/weight variable.
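To see why an unexpected filter or weight is a common culprit, it helps to remember how each changes a statistic. The following is a generic sketch (not Displayr's implementation) with made-up numbers:

```python
# Made-up data: age, a gender filter variable, and a weight variable.
ages   = [25, 34, 52, 61, 29, 45]
male   = [True, False, True, True, False, True]
weight = [0.8, 1.2, 1.0, 1.5, 0.9, 0.6]
over40 = [a > 40 for a in ages]

# Unfiltered, unweighted: percent of all cases aged over 40.
pct = 100 * sum(over40) / len(ages)

# Filtered to males: only cases where the filter is True are counted.
pct_male = 100 * sum(o for o, m in zip(over40, male) if m) / sum(male)

# Weighted: each case contributes its weight rather than 1.
pct_weighted = 100 * sum(w for o, w in zip(over40, weight) if o) / sum(weight)

print(round(pct, 1), round(pct_male, 1), round(pct_weighted, 1))
```

The same underlying data produces 50.0%, 75.0%, and roughly 51.7% depending on the filter and weight applied, which is why checking both is an early step in troubleshooting.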
Check the structure of the variable sets used to construct the table
When Displayr imports data, it automatically groups variables into variable sets. Sometimes it groups the variables, or chooses a structure, in a way that is different from what you expect. See Variable Sets and Manipulating Data for more information.
Review the raw data
Select a variable set in the Data Sources tree, right-click, and select View in Data Editor to review the raw data. If you select Values, you will see any recoding of the data denoted using an arrow from the source value (4 below) to the recoded value (100 below).
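The recoding shown in the Data Editor can be thought of as a simple mapping from source values to recoded values. A minimal sketch in generic Python (the values here are illustrative, echoing the 4 → 100 example above):

```python
# Hypothetical value attributes: source value -> recoded value.
recode_map = {1: 1, 2: 2, 3: 3, 4: 100}

raw = [4, 2, 4, 1, 3]                     # values stored in the data file
recoded = [recode_map[v] for v in raw]    # values actually used in analysis

print(recoded)
```

If a table's averages or percentages look wrong, a recoded value like this is often the explanation, which is why reviewing the raw data alongside the recoding is worthwhile.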
Review value attributes
You can see how data has been recoded by selecting any variable and pressing the Values button in the object inspector. Please see Variable Sets for how to adjust and confirm value attribute settings.
Additional insight can be obtained by using the Reset and Split options in the right-click menu to see which categories are used in NETs and merged rows/columns on the table.
Review the inputs and other settings in the object inspector
Anything that is calculated in Displayr (variables, tables, calculations, and visualizations) can be clicked on, allowing you to see its data inputs. For example, below we can see that the table is a Summary of Customer effort.
Additionally, the result may be caused by other settings in the object inspector, so have a hunt around. For more complex tables, you may also want to review the dependency graph to confirm that all of the correct upstream data is being used.
Follow links back to their source
When you hover over an input, a tooltip appears in the object inspector. For example, the tooltip obtained when hovering your mouse pointer over Customer effort is shown above. You can click on the arrow in the tooltip, and Displayr will select that input. This allows you to trace any calculation back to its source.
Review Rules
A Rule is a bit like conditional formatting in Excel, except that it can also change the data and can be applied to visualizations. As an example, the table below replaces small values with * symbols. You can check whether a Rule has been applied to a table by selecting it and going to Data (or Appearance) > Rules.
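The effect of a Rule like the one described above, masking small values, can be sketched in generic Python (this is an illustration of the idea, not an actual Displayr Rule, and the threshold and data are assumptions):

```python
MIN_VALUE = 30  # assumed threshold below which values are masked

# Made-up table cells: label -> value.
cells = {"Coke": 45, "Pepsi": 12, "Fanta": 31}

# Replace any value below the threshold with "*".
masked = {k: (v if v >= MIN_VALUE else "*") for k, v in cells.items()}

print(masked)
```

If a table shows * symbols or values that differ from the raw data, a Rule like this may be the cause, so it is worth checking before digging deeper.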
For more detail on how Rules can be used in Displayr and other caveats, please see How to Use Rules in Displayr.
Review the statistical testing assumptions
The statistical testing options are set in object inspector > Appearance > Significance.
To see more detail on how these settings are applied, please see How to Apply Significance Testing in Displayr, and to work through what might be causing incorrect stat testing results, please see How to Investigate Your Statistical Significance Testing.
Search Help
The search feature in Displayr Help (which you are reading at the moment) can be used to search through our documentation. There you may find articles addressing specific issues, such as Why don't my percentages add up to 100% and How to Set Value Attributes for a Binary-Multi and Binary-Grid.
Contact us
If the result looks wrong and you can't troubleshoot it yourself, please contact support@displayr.com and give us as much information as you can. Please see the last section of What To Do When Displayr's Results Are Different Than Another Program's Results for tips on how to give us the information we need to track such things down.