In Displayr, statistical testing is built into tables and some visualizations automatically, surfacing significant results for you. Based on the structure of your data, Displayr performs the most appropriate test and highlights all significant results in your tables and visualizations. However, you can also override Displayr's default selections to set up specific tests, as desired.
This article lists some of the most frequently asked questions about statistical testing, as well as how to modify its settings. For a more detailed review of the various settings, see How to Apply Significance Testing in Displayr. If you see unexpected or inconsistent statistical testing results, see How to Investigate Your Statistical Significance Testing.
Questions:
What do the blue and red fonts and arrows mean on my table or chart?
What changes with stat testing if I have overlapping samples in the columns of my table?
How is stat testing done with spans?
What values or statistics are used in stat testing?
What types of statistical testing are available on tables?
How do I interpret my stat testing results?
Why don't my stat testing arrows make sense with the data shown in the table?
How does statistical testing work on banners?
How do I turn off stat testing?
Can Displayr match significance testing to other software's results?
Can I stat test across rows instead of columns?
Why did my significance test results change when I merged, added, or hid a column or category?
Can I test against the Total column?
Can I choose or specify exactly which columns to compare when using column comparisons?
Can I test against the previous time period?
What do the blue and red fonts and arrows mean on my table or chart?
Exception tests on a table use colors, arrows, or another symbol to identify which cells are significantly different from the value among respondents not in that column (the "exception" or complement group). A blue color indicates that a number is significantly high, and red means that it is significantly low. Arrows also indicate the statistical significance of the cells, and the length of the arrow indicates the degree of statistical significance relative to the other cells in the table. For example, longer arrows denote smaller p-values.
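If it helps to see the mechanics, the sketch below runs a simplified version of this kind of test in Python. It illustrates the general idea only (a two-proportion z-test of a cell against its complement) and is not Displayr's exact algorithm, which also handles weights, design effects, and multiple-comparison corrections. The counts are hypothetical.

```python
# Illustrative sketch only, NOT Displayr's implementation: an "exception
# test" compares one column's proportion against the pooled proportion
# of everyone NOT in that column.
from scipy.stats import norm

def exception_test(x_in, n_in, x_out, n_out):
    """Two-proportion z-test of a column cell against its complement."""
    p_in, p_out = x_in / n_in, x_out / n_out
    pooled = (x_in + x_out) / (n_in + n_out)
    se = (pooled * (1 - pooled) * (1 / n_in + 1 / n_out)) ** 0.5
    z = (p_in - p_out) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided p-value
    return z, p_value

# Hypothetical example: 60 of 100 respondents in the column say "yes",
# vs. 45 of 100 respondents NOT in the column (the "exception" group).
z, p = exception_test(60, 100, 45, 100)
color = "blue (significantly high)" if z > 0 else "red (significantly low)"
print(f"z = {z:.2f}, p = {p:.3f} -> {color if p < 0.05 else 'no marker'}")
```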
What changes with stat testing if I have overlapping samples in the columns of my table?
If you have overlapping samples in the columns of your table (that is, a case is counted in more than one column), the overlapping cases are excluded from the testing by default. You can override how overlaps are handled via the object inspector > Appearance > Significance > Advanced settings > Column comparisons > Overlaps; for more information, see Statistical Testing with Overlapping Data. To investigate a specific result further, select the cell(s) and right-click > Statistical Test to see the details, as described in How to See Statistical Testing Detail using a Table.
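To make the default concrete, here is a minimal Python sketch of setting aside overlapping cases before two columns are compared. The respondent IDs are hypothetical, and this illustrates the concept rather than Displayr's implementation.

```python
# Illustrative sketch only: dropping overlapping cases before a pairwise
# comparison, mirroring the default described above. Not Displayr's code.
col_a = {1, 2, 3, 4, 5, 6}   # hypothetical respondent IDs in column A
col_b = {4, 5, 6, 7, 8, 9}   # hypothetical respondent IDs in column B

overlap = col_a & col_b       # cases counted in both columns
test_a = col_a - overlap      # what remains of A for the test
test_b = col_b - overlap      # what remains of B for the test

print(f"Excluding {len(overlap)} overlapping cases; "
      f"comparing {len(test_a)} vs {len(test_b)} respondents.")
```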
How is stat testing done with spans?
When using Compare Columns, statistical testing compares columns only within the same span by default. You can adjust the Significance > Advanced settings to apply testing across spans. See How to Specify Columns to be Compared in a Table for more information.
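As a rough illustration of what "within the same span" means, the sketch below enumerates which column pairs would be tested under the default versus with testing across spans enabled. The span layout and column letters are hypothetical.

```python
# Illustrative only: which column pairs get compared under Compare Columns.
# The spans and column letters here are hypothetical.
from itertools import combinations

spans = {"Male": ["a", "b", "c"], "Female": ["d", "e", "f"]}

# Default: compare only columns within the same span.
within = [pair for cols in spans.values() for pair in combinations(cols, 2)]

# With testing across spans enabled: compare every pair of columns.
all_cols = [c for cols in spans.values() for c in cols]
across = list(combinations(all_cols, 2))

print(len(within), "pairs within spans:", within)
print(len(across), "pairs when testing across spans")
```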
If you are using exception testing (arrows or font colors), the approach differs depending on your span setup:
- If your spans are created from categories of nominal/ordinal variable sets, or are the variable set names, the exception test divides all categories in the variable set used to make the spans into the within-cell and "exception" groups. For example, if you nest Gender on top of Age, then a value like Female - Under 30 may be tested against respondents in all other columns nested under both the Female and Male Gender columns.
- If your span is from a Binary-Multi variable set where 0s are included in analyses, the "exception" group will be the 0s from the variable in the set (i.e., it will typically test respondents across spans from that variable set). The table below shows the proportions of people who have eaten at a restaurant in the past month crossed by the gender of those who have ever eaten at a brand (0s included). Values within the same span can both be marked as significantly higher because the values are being tested across spans/brands.
- If your span is from a Binary-Multi variable set where 0s are excluded from analysis, only respondents in columns under the span are tested against each other. The table below shows the proportions of people who have eaten at each brand in the past month crossed by the gender of those who have ever eaten at a brand (0s excluded). When values are significant, they are in the opposite direction from the other column under the span because columns within the span are now tested against each other.
To perform exception testing within a span for two Nominal variable sets, you must convert the top variable set to a Binary-Multi structure (using Filters from Selected Data automation) and fix its Values to include only the 1s in the analyses. This setup ensures that the exception test is properly constrained to the desired span. For more detailed instructions, please refer to the documentation in Banners.
What values or statistics are used in stat testing?
Testing is always performed on the %, Column %, or Average statistic (depending on the structure of the data), regardless of the statistics shown in the table.
What types of statistical testing are available on tables?
There are two types of statistical testing available in Displayr. You can adjust these from the Properties > Appearance > Significance section in the object inspector.
- Exception testing (or complement testing) - The value in a cell is tested against the value among respondents not in that column (its "exception"). To apply exception tests, use the Arrows, Font colors, or Arrows and Font colors setting.
- Pairwise testing (or column comparisons) - The value in a cell is tested against the corresponding value in each of the other columns. Set the significance test setting to Compare Columns to run column comparisons on individual column values.
See How to Apply Significance Testing in Displayr for more information about these settings and how to make further adjustments.
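For intuition on the pairwise approach, here is a small Python sketch of column comparisons using the same simplified two-proportion z-test idea as above. The column letters and counts are hypothetical, and no multiple-comparison correction is applied, so treat this as a conceptual illustration rather than Displayr's procedure.

```python
# Illustrative only: pairwise "column comparisons", where each column's
# proportion is tested against every other column's. Displayr marks
# significant columns with letters; this sketch just prints each pair.
from itertools import combinations
from scipy.stats import norm

columns = {"a": (60, 100), "b": (45, 100), "c": (52, 100)}  # (successes, n)

def two_prop_z(x1, n1, x2, n2):
    """Two-sided p-value from a pooled two-proportion z-test."""
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (x1 / n1 - x2 / n2) / se
    return 2 * norm.sf(abs(z))

for (c1, (x1, n1)), (c2, (x2, n2)) in combinations(columns.items(), 2):
    p = two_prop_z(x1, n1, x2, n2)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{c1} vs {c2}: p = {p:.3f} ({verdict})")
```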
How do I interpret my stat testing results?
For a detailed interpretation of your results, you can select a cell (or cells, for column comparisons), right-click, and choose Statistical Test. This provides a detailed explanation of the result, including the meaning of any arrows and the statistical significance between the compared cells. See How to See Statistical Testing Detail using a Table.
For additional information about interpreting statistical tests on different types of tables, see the How to Read Tables and Interpret Statistical Tests section of our Help Center.
Why don't my stat testing arrows make sense with the data shown in the table? Why don't the arrows/font colors appear where I expect them to?
If some or all of the respondents in a column overlap with other tested columns, respondents in both columns are removed from the statistical testing by default. See What changes with stat testing if I have overlapping samples in the columns of my table? above for more information and for how these settings can be adjusted.
Displayr only ever uses the Column % in proportions tests and the Average in numeric tests. If you are displaying other statistics in the table, the results may look inconsistent with the visible values because the test is still based on the Column % or Average, regardless of the statistics shown.
See How to Investigate Your Statistical Significance Testing for other reasons and settings to help troubleshoot why your results are different from what you expect to see.
How does statistical testing work on banners?
When using Compare Columns, statistical testing compares columns only within the same span by default. You can adjust the Significance > Advanced settings in the object inspector to apply testing across spans. See How to Specify Columns to be Compared in a Table for more information.
If you are using exception testing (arrows or font colors), the approach differs depending on your span setup. See How is stat testing done with spans? for more details.
See the "Statistical Testing" section in Banners for more detailed information.
How do I turn off stat testing?
To turn off statistical tests on tables or visualizations, go to the Properties > Appearance > Significance section in the object inspector and select No.
Can Displayr match significance testing to other software's results?
Most programs use slightly different statistical tests. In particular, Displayr does not default to the tests that are standard in SPSS, Quantum, and Survey Reporter, but equivalent tests can often be selected by modifying the options under Object Inspector > Appearance > Significance > Advanced.
You can read more about adjusting the significance settings to match other programs in the following:
- How to Replicate SPSS Significance Tests in Displayr
- How to Replicate Quantum Significance Tests
- How to Replicate Survey Reporter Significance Tests
If you are using software that isn't listed above, please contact Support and provide the exact settings used in the other program.
Can I stat test across rows instead of columns?
Yes, you can test across rows instead of columns by using one of the following rules:
Note that for both options, you will need to have rows and columns in your table. If you don't have a variable to display in the columns, you can create a dummy variable. To create a dummy variable, use the steps in How to Create a Custom JavaScript Variable using a "1" for the code at Step 2 and setting the Structure to Nominal.
This rule does not modify the caption of the table to reflect any updated confidence levels and will instead show the default statistical assumption settings. To modify the caption manually, click into the caption and start typing.
Why did my significance test results change when I merged, added, or hid a column or category?
If you have False Discovery Rate Correction applied to your tables via Appearance > Significance > Advanced > Exception Tests, results that were previously significant may no longer be significant (or vice versa) when a column is added or removed. This is a consequence of the multiple-comparison procedure: the corrected significance thresholds depend on the full set of comparisons in the table. To avoid this, it can be useful to turn off multiple comparison corrections via Appearance > Significance > Advanced > Column Comparisons > Multiple comparison correction.
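To see why the set of columns matters, here is a small Python sketch of the Benjamini-Hochberg step-up procedure, a common False Discovery Rate correction. The p-values are hypothetical, and this is not Displayr's code; the point is that adding one more comparison changes the thresholds every p-value is held to.

```python
# Illustrative sketch of Benjamini-Hochberg (not Displayr's code): a
# p-value is significant if it falls at or below rank/m * alpha for the
# largest qualifying rank, so the outcome depends on the WHOLE set of
# comparisons in the table.
def bh_significant(p_values, alpha=0.05):
    """Return which p-values survive a Benjamini-Hochberg correction."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    largest_ok = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            largest_ok = rank
    passed = set(order[:largest_ok])
    return [i in passed for i in range(m)]

three_cols = [0.010, 0.030, 0.040]          # hypothetical p-values
four_cols = [0.010, 0.030, 0.040, 0.800]    # same table plus one new column

print(bh_significant(three_cols))  # [True, True, True]
print(bh_significant(four_cols))   # [True, False, False, False]
```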
Otherwise, you could be using inappropriate statistical testing settings. See How to Explain Why Significance Test Results Change When Columns Are Added or Removed for more information.
Can I test against the Total column?
Yes, you can test against the Total, NET, and Average columns. If you are using arrows and font colors (exception tests), see How To Compare A Sub-Group Against The Total. If you are using column comparisons, see How to Include the Main NET Column in Column Comparisons.
Can I choose or specify exactly which columns to compare when using column comparisons?
Yes. Select a column header in your table, and then from the object inspector, go to Appearance > Significance > Advanced > Column Comparisons. From there, you can use the Compare options to specify whether to test all columns, only columns within spans, or exactly which columns to test. See How to Specify Columns to be Compared in a Table for more information.
Can I test against the previous time period?
Yes, by default, Displayr automatically compares to the previous time period if you are using a Date/Time variable in the columns. If you do not have a Date/Time variable, see How to Test Against the Previous Period Without a Date/Time Variable for instructions on how to create a new date/time variable using JavaScript.