When doing statistical significance testing, you may run into the multiple comparisons (post hoc testing) problem: the more tests you conduct, the more false discoveries (false positives) you make.
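To make the problem concrete, here is a hypothetical simulation (not anything Displayr runs internally): even when every null hypothesis is true, the expected number of "significant" results at p < 0.05 grows in proportion to the number of tests.

```python
# Hypothetical simulation of the multiple comparisons problem: when no
# real differences exist, a p-value is uniform on [0, 1] under the null,
# so each test has a 5% chance of a false discovery at alpha = 0.05.
import random

random.seed(1)

def false_positives(n_tests, alpha=0.05, trials=2000):
    """Average number of falsely 'significant' results per table,
    simulated over many tables in which every null is true."""
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(n_tests) if random.random() < alpha)
    return total / trials

print(false_positives(10))   # roughly 0.5 false discoveries per table
print(false_positives(100))  # roughly 5 false discoveries per table
```

A table with 100 cells is therefore expected to contain around five spurious "significant" results before any correction is applied.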
Multiple Comparison Correction (MCC) addresses the false discovery problem by requiring results to have smaller p-values before they are classified as significant. See the Technical Notes section below for more examples of how MCC is affected by the structure of the table.
This problem is most likely to arise when many comparisons (tests) are run in a single table, such as in tracking surveys where the number of wave columns grows each round, or in crosstabs built from banners with many columns (as opposed to individual variable sets with fewer columns). If the significance testing results in a table change when you add new columns or waves, or are inconsistent across tables that use some of the same variable sets, examine the multiple comparison correction approach you're using and determine whether it's appropriate for your analysis.
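One widely used false discovery rate correction is the Benjamini-Hochberg procedure, sketched below for illustration. Displayr offers an FDR option in its settings, but this sketch is a general illustration of the technique, not Displayr's exact implementation; the p-values are made up.

```python
# A minimal sketch of the Benjamini-Hochberg false discovery rate (FDR)
# correction. The k-th smallest p-value is compared against
# (k / m) * alpha, where m is the total number of tests.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a list of booleans, in input order: True where the test
    is significant after FDR correction at level alpha."""
    m = len(p_values)
    # Indices of the tests, ordered from smallest to largest p-value.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k such that p_(k) <= (k / m) * alpha.
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            threshold_rank = rank
    # All tests up to and including that rank are declared significant.
    significant = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= threshold_rank:
            significant[idx] = True
    return significant

p = [0.001, 0.02, 0.04, 0.045, 0.20]  # hypothetical p-values
print(benjamini_hochberg(p))  # [True, True, False, False, False]
```

At a raw threshold of p < 0.05, four of these five tests would be significant; after the correction, only two are.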
This article describes how to go from a table showing significant differences at a 95% confidence level:
To a table showing significant differences at a 95% confidence level AND with false discovery rate correction applied:
Requirements
- Familiarity with Displayr's significance testing settings. See: How to Apply Significance Testing in Displayr
- A document containing a table or visualization showing significant differences.
- An understanding of the concept of multiple comparison problem, see our Multiple Comparison Problem (Post Hoc Testing) article for a detailed explanation.
Method
- Select the table or visualization in your document.
- From the object inspector, select Appearance > Significance > Advanced
- Set the Multiple comparison correction based on what type of test you are showing:
- Arrows, Font colors, or Arrows and Font colors use Exception Tests. On the Exception Tests tab, select Multiple comparison correction > False Discovery Rate (FDR) or None.
- Compare columns uses Column Comparisons, sometimes also called pairwise comparisons. On the Column Comparisons tab, select one of the algorithms in the drop-down for Multiple comparison correction.
- [OPTIONAL] If you only want the correction to be based on the number of cells within a span, check Within row and span. Otherwise, the correction is based on the number of cells in the entire table. This is most relevant when running significance testing on banners, where there are multiple groups of columns.
- Select Apply to Selection to apply the correction to just this table, or Set as Default to make it the default for all tables in the document. To learn more about defaults, see: How to Set the Default Type of Significance Test.
The results are as follows:
Restore to Document Default Settings
To revert to the document's current default settings:
- Click the Restore button
Note: The Restore button is only enabled when the settings for the selected item(s) differ from the document's current default settings (the settings saved via Set as Default). The Restore button is disabled if you have not saved any settings for the selected item(s). See: How to Set the Default Type of Significance Test to learn more about changing the default settings.
Technical Notes
The extent to which the MCC makes it harder for a result to become significant depends mostly on the number of tests (cells) shown on the table (or within the span, if Within row and span is checked in the Advanced Statistical Testing Assumptions) for the specific test type shown on the output. That is to say, you can have different MCC settings for Exception Tests versus Column Comparisons.
The correction that MCC uses may take other data into account when determining how much to raise the threshold for significance, such as the range of values across the table. In most cases, increasing the number of columns or rows on a table makes the MCC stricter, so fewer results become significant compared to the same table with fewer cells.
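This size effect can be demonstrated with a self-contained sketch of the Benjamini-Hochberg FDR procedure (an illustration of the general technique, not Displayr's exact algorithm; the p-values are made up). The same borderline result passes the correction in a small table but fails in a larger one, because the per-test threshold scales with the number of tests.

```python
# Hypothetical illustration: a p-value of 0.004 survives FDR correction
# among 10 tests (cells) but not among 40, because the smallest p-value
# must satisfy p <= alpha / m under Benjamini-Hochberg.

def bh_count_significant(p_values, alpha=0.05):
    """Count how many tests are significant under the Benjamini-Hochberg
    FDR correction at level alpha."""
    m = len(p_values)
    best = 0
    # The k-th smallest p-value is compared against (k / m) * alpha.
    for rank, p in enumerate(sorted(p_values), start=1):
        if p <= rank / m * alpha:
            best = rank
    return best

small_table = [0.004] + [0.5] * 9    # 10 cells, one borderline result
large_table = [0.004] + [0.5] * 39   # 40 cells, same borderline result
print(bh_count_significant(small_table))  # 1 -> still significant
print(bh_count_significant(large_table))  # 0 -> no longer significant
```

The underlying data for the borderline cell never changed; only the number of cells sharing the correction did.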
For example, the two tables below show the same two variable sets, but with some of the rows hidden on the lower one. Because the lower table contains fewer cells, its MCC adjustment is smaller than the upper table's, and so it shows more significant results even though the percentages in the table are exactly the same:
Another situation where you may notice differences is a tracking study using column comparisons. When additional waves of data arrive, you may see historical significance results change as more columns are added to the table. In the table below, some historical results actually appear after adding more columns (usually the opposite happens, and fewer significant results are shown):
Next
How to Apply Significance Testing in Displayr
How to Set the Default Type of Significance Test
How to Compare Significant Differences Between Columns
How to Conduct Significance Tests by Comparing to Previous Time Periods
How to Change the Confidence Level at which Significant Differences are Displayed