While analyzing your data for insights, it's one thing for a value to be higher than another, but it's a much more powerful statement if a value is significantly higher or lower than another. In Displayr, statistical testing is built into summary tables and crosstabs systematically, pulling out these insights for you. Based on your data and testing assumptions, Displayr automatically chooses the best test and shows all the significant results on your table. Testing is always performed on the %, Column %, or Average statistic (depending on the structure of the data), regardless of the statistics shown in the table.
This article describes how to go from a standard table in Displayr showing no significant differences:
To a table that shows cell comparison significant differences:
- A document containing a standard built-in Displayr table.
- Understanding of the two types of testing done in Displayr; see our Introduction to Significance Testing article in the.datastory.guide, which also includes detailed examples of how exception testing and column comparison testing work.
1. Determine which type of testing you want to run, select your table(s), and choose the appropriate option in the object inspector > Properties > Appearance > Significance.
There are two ways to run testing:

| Arrows and Font colors | Column Comparisons |
|---|---|
| Runs exception testing (also known as complement testing). | Runs column comparisons on individual column values, also known as pairwise testing. |
| The value in a cell is tested against the value of respondents not in that column (its complement). | The value in a cell is tested against the individual values in each column. |
2. [OPTIONAL]: Select the Advanced option if you want to review or modify the formatting or assumptions for the significance testing; see the Advanced Statistical Testing Assumptions section below for more detail.
3. [OPTIONAL]: If you want to apply this testing as the default across all tables that use the default testing, see How to Set the Default Type of Significance Test. Otherwise, these settings will only apply to this table and any other tables you selected before changing the Significance dropdown.
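The difference between the two testing approaches described in step 1 can be sketched in code. The following is a minimal illustration using a simple two-proportion z-test on hypothetical survey counts; it is NOT Displayr's actual implementation, which automatically selects tests and applies corrections and weighting based on your data and settings.

```python
# Illustrative sketch only -- NOT Displayr's implementation. Displayr
# chooses the test, corrections, and weighting from your data and settings.
# A simple two-proportion z-test contrasts the two kinds of testing.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approx.
    return z, p_value

# Hypothetical counts of "Yes" answers per column: (successes, sample size)
columns = {"Male": (60, 100), "Female": (40, 100), "Other": (12, 40)}

# Exception (complement) testing: each column vs. everyone NOT in it
total_x = sum(x for x, _ in columns.values())
total_n = sum(n for _, n in columns.values())
for name, (x, n) in columns.items():
    z, p = two_proportion_z(x, n, total_x - x, total_n - n)
    print(f"{name} vs complement: z={z:+.2f}, p={p:.3f}")

# Column comparisons (pairwise): every pair of columns is tested
names = list(columns)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        z, p = two_proportion_z(*columns[a], *columns[b])
        print(f"{a} vs {b}: z={z:+.2f}, p={p:.3f}")
```

Note that exception testing runs one test per cell, while column comparisons run one test per pair of columns, so the number of pairwise tests grows quickly as columns are added.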
Advanced Statistical Testing Assumptions
- Select the table(s) whose significance testing assumptions you want to change (you only need to select one if you want to change the default settings for all tables using default testing).
- In the object inspector, select Properties > Appearance > Significance > Advanced.
You will see the Advanced Statistical Testing Assumptions dialog:
- Select from one of the four tabs on the top depending on the type of task you want to perform:
- Significance Levels:
- Show significance: show higher or lower significance with arrows, font colors (tables only), or symbols to indicate significant differences between columns
- Overall significance level: the significance level used to determine which results are shown as statistically significant
- Minimum sample size for testing: cells with sample sizes below this value are not tested when conducting automated tests of statistical significance between cells (i.e., Cell Comparisons and Column Comparisons)
- Extra deff: an additional design effect applied to the tests; by default, the deff value (design effect) is set to 1.00
- Significance levels and appearance: the symbols used to denote different levels of statistical significance.
- Test Type:
- Proportions: non-parametric tests will be done on categorical data
- Means: a t-test is performed on numeric data, with variances estimated using Bessel’s correction
- Correlations: default is Pearson
- Equal variance in tests when sample size is less than: if the sample size is less than 10, equal variance is assumed.
- Exceptions Test:
- Multiple comparison correction: a False Discovery Rate correction is applied by default, based on the entire table, to help reduce the number of false positives. A check box is available if you prefer to instead apply the correction within each span within each row.
- Significance symbol: choose the symbol you want to show significance, either Arrow or Triangle. The default is Arrow.
- Column Comparisons:
- Multiple Comparison correction: the following corrections are available for post hoc testing: Fisher LSD, Duncan, Newman Keuls (S-N-K), Tukey HSD, False Discovery Rate (FDR), False Discovery Rate (pooled t-test), Bonferroni, Bonferroni (pooled t-test), Dunnett.
- Overlaps: The default is to ignore the sample that overlaps between columns when respondents in columns are not mutually exclusive
- No test symbol: a hyphen (-) is shown if a test isn’t performed due to the settings
- Symbol for non-significant test: nothing is shown if a test result is not significant
- ANOVA-Type Test: Select whether ANOVA-type tests are run as part of the testing
- Show redundant tests: Select whether to show significance on one cell (the one with the higher value) or all cells involved in testing.
- Show as groups: show letters for columns that are not significantly different, rather than those that are
- Recycle column letters: Each span begins labeling columns at A
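Several of the correction options above are False Discovery Rate variants. As an illustration of what an FDR correction does, here is a minimal sketch of the common Benjamini-Hochberg procedure; Displayr's own implementation, and its pooled and within-span variants, may differ in detail, and the p-values below are hypothetical.

```python
# Minimal Benjamini-Hochberg FDR sketch -- illustrative only, NOT
# Displayr's implementation (which also handles spans, pooling, etc.).

def fdr_bh(p_values, alpha=0.05):
    """Return a list of booleans: which p-values remain significant after
    Benjamini-Hochberg false discovery rate correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    significant = [False] * m
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    max_k = -1
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            max_k = rank
    # ... then reject (flag as significant) all hypotheses up to rank k.
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            significant[i] = True
    return significant

# Raw p-values from many cell tests on one table (hypothetical)
p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
print(fdr_bh(p_values))
# -> [True, True, False, False, False, False, False, False]
```

With a naive p < 0.05 cutoff, five of these eight hypothetical p-values would be flagged; the FDR correction flags only the first two, reducing the expected share of false positives across the table.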
Restore to Document settings
To revert to document settings:
- Click the Restore button
Note: The Restore button will only be enabled when settings for the selected item(s) are different from the document settings (settings saved by Apply as Default). The Restore button will be disabled if you did not save any settings for the selected item(s).
See Detail from Statistical Testing
If you'd like more information on how these settings are used in the testing on your table, you can investigate the specific statistical test, see the intermediary calculations, and get a description of what is being tested by using the Statistical Test (alpha) feature; see How to See Statistical Testing Detail using a Table.