Whenever a statistical test concludes that a relationship is significant when, in reality, there is no relationship, a false discovery has been made. When multiple tests are conducted, this leads to the multiple testing problem (also known as the multiple comparisons problem, the post hoc testing problem, data dredging, or, sometimes, data mining): the more tests that are conducted, the more false discoveries are made.
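To see why more tests mean more false discoveries, consider independent tests run at a 5% significance level on data with no real relationships. The chance of at least one false discovery is 1 - (1 - 0.05)^m for m tests. This is a minimal sketch of that arithmetic (the independence assumption is an illustrative simplification, not a claim about any particular table):

```python
# Probability of at least one false discovery across m independent tests,
# each run at significance level alpha, when no real relationships exist.
alpha = 0.05

for m in (1, 5, 20, 100):
    p_any_false_discovery = 1 - (1 - alpha) ** m
    print(f"{m:>3} tests: {p_any_false_discovery:.0%} chance of at least one false discovery")
```

With 20 tests the chance of at least one false discovery is already about 64%, and with 100 tests it is nearly certain.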
The multiple comparisons problem is most likely to arise when many comparisons (tests) are run in a single table, such as in tracking surveys whose tables include many individual waves of data. If the significance testing results in a table change after a new wave of data is added, or the results in the table differ from what you expect, examine the multiple comparison correction approach you are using and determine whether it is appropriate for your analysis.
Multiple comparison corrections attempt to fix this problem. The basic way they work is by requiring results to have smaller p-values in order to be classified as significant.
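As an illustration of how such a correction tightens the threshold, here is a minimal sketch of the Benjamini-Hochberg procedure, a standard way of controlling the False Discovery Rate (the `benjamini_hochberg` function and the example p-values are hypothetical, for demonstration only; they are not the exact computation performed by the software):

```python
def benjamini_hochberg(p_values, q=0.05):
    """Return a list of booleans marking which p-values remain
    significant after Benjamini-Hochberg FDR correction at level q."""
    m = len(p_values)
    # Rank the p-values from smallest to largest.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest p-value p_(k) satisfying p_(k) <= (k / m) * q;
    # every p-value at or below it is declared significant.
    cutoff = -1.0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * q:
            cutoff = p_values[i]
    return [p <= cutoff for p in p_values]

p_values = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(p_values))
# Only the first two survive the correction, even though four of the
# six p-values are below the uncorrected 0.05 threshold.
```

Note how the effective threshold shrinks as more tests are considered: the smallest p-value is compared against q/m, not q.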
- A document containing a table showing significant differences.
To apply multiple comparison correction to Column Comparisons:
1. Select the table in your document.
2. From the Object Inspector, select Properties > Significance > Advanced.
3. Select the Significance levels tab.
4. In the Show significance drop-down box, select Compare columns.
5. In the Object Inspector, select Properties > Significance > Advanced.
6. Select the Column Comparisons tab.
7. From the Multiple comparison correction menu, select the type of multiple comparison correction you want to use:
In this example, we will select the False Discovery Rate (FDR) correction.
8. Select Apply to Selection to apply it to just this table or Apply as Default to make it the default for all crosstabs with date variables in the document.
The results are as follows: