This article covers a general framework for investigating unexpected statistical significance results and adjusting your document's Advanced Statistical Testing Assumptions accordingly. If you're setting up your statistical testing assumptions for the first time, see How to Apply Significance Testing in Displayr first. This article does not troubleshoot custom significance testing or testing run using Rules in Displayr.
1. Gather the details about the issue
2. Find what is causing the issue
3. Figure out a workaround or a solution
Requirements
Please note that this process requires either the Data Stories module or a Displayr license. Some of the steps below require a Displayr license, but most can be performed with the Data Stories module.
- A table with built-in significance testing applied.
- Familiarity with How to Apply Significance Testing in Displayr and How to See Statistical Testing Detail using a Table (using the alpha button).
- Understanding of the two types of testing done in Displayr; see our Introduction to Significance Testing article in the Data Story Guide, which also includes detailed examples of how exception testing and column comparison testing work. A quick sketch of the difference follows below.
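For intuition about the difference between the two test types, below is a minimal sketch in R (which Displayr supports for custom calculations). It is illustrative only, not Displayr's built-in routine: the counts are hypothetical, and the real tests also account for settings such as weights and overlaps.

```r
# Illustrative R sketch only -- not Displayr's internal code. Counts are hypothetical.

# Exception test: compare one cell's Column % against its complement
# (everyone not in that column), e.g. 45/100 in the column vs 30/100 outside it.
prop.test(c(45, 30), c(100, 100))

# Column comparison: compare the same row's Column % across two banner
# columns, e.g. column A vs column B.
prop.test(c(45, 38), c(100, 120))
```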
1. Gather the details about the issue
You will use these details in step 2 to figure out what is causing the issue.
- Why do you consider the results unexpected? Are results unexpected across the board or only for a particular result?
- Review the Advanced Statistical Testing Assumptions of the table.
- Use the alpha button to review the statistical testing for a particular cell or cells (if comparing columns instead of using arrows and fonts). Note it's easiest to Duplicate the table and remove any Rules before trying this.
- Do any messages pop up or show in the results when you do this?
- Are any of the details about the statistical testing unexpected?
- Are there any rules applied to the table that change the statistics shown? Keep in mind that statistical testing is done before Rules are applied to the table.
- What type of data is shown in the columns? Can the same respondent be in more than one column? Are there hidden columns? Are there column spans?
- What statistic is shown in the cells of the table? Remember that Displayr only ever uses the Column % in proportions tests and the Average in numeric tests (see the sketch after this list).
- What documentation and help information is available about the tool(s) or issue? You can search the Displayr Help Center or our sister software Q's wiki for more technical information on statistical tests and examples to explain testing.
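To make this concrete, the following hedged sketch tests hypothetical Column % counts with a proportions test and hypothetical raw values with a t-test; Welch's t-test stands in here for Displayr's more configurable numeric testing.

```r
# Hypothetical numbers throughout -- a sketch, not Displayr's implementation.

# Proportions test: driven by the Column % and column base, even when the
# table displays Row % or Total %.
x_A <- 90;  n_A <- 200   # 45% Column % in column A
x_B <- 84;  n_B <- 240   # 35% Column % in column B
prop.test(c(x_A, x_B), c(n_A, n_B))

# Numeric test: driven by the Average; here Welch's t-test on simulated values.
set.seed(1)
a <- rnorm(n_A, mean = 6.1, sd = 2)
b <- rnorm(n_B, mean = 5.6, sd = 2)
t.test(a, b)
```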
2. Find what is causing the issue
Once you have figured out what is causing the issue, you can move on to step 3.
If a column isn't getting tested:
- If it's a Total or NET column, confirm all steps are followed in How to Include the Main NET Column in Column Comparisons.
- Do some or all of its respondents overlap with other tested columns? By default, respondents who are in both columns are removed from the statistical testing; see the technical detail for Overlaps on our wiki (a sketch of this adjustment follows this list).
- Given the Overlap setting, does the column meet the minimum sample size?
- If using exception tests, does the column have a NOT or opposing category to compare against? If the variable set used in the columns has only one category and missing data is excluded (so there is no 0/not-selected category or hidden category), a statistical test cannot be conducted because there is no other category to compare to.
- Similarly, if the columns are mutually exclusive and the only alternative to each column is excluded missing data, no respondent has a non-missing value outside the column being tested. A statistical test cannot be conducted because there is no opposing group to test against. A clue that this might be the issue is when the percentages in the column are all 100%. This is typically seen with complex BANNERs.
- If you are testing across mutually exclusive categories in a banner, be sure they are a part of the same variable set and are not set up as individual binary variables.
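The sketch below illustrates, on made-up multi-select data, how removing overlapping respondents can shrink a column's effective base below the minimum sample size; the cutoff of 30 is an arbitrary stand-in for your actual setting.

```r
# Hypothetical multi-select banner where respondents can sit in both columns.
set.seed(42)
in_A <- sample(c(TRUE, FALSE), 60, replace = TRUE)
in_B <- sample(c(TRUE, FALSE), 60, replace = TRUE)

# Default Overlaps behaviour: respondents in both columns are removed
# before the columns are compared.
overlap <- in_A & in_B
n_A_eff <- sum(in_A & !overlap)
n_B_eff <- sum(in_B & !overlap)
c(n_A = sum(in_A), n_B = sum(in_B), n_A_eff = n_A_eff, n_B_eff = n_B_eff)

# If an effective base drops below the Minimum sample size setting
# (an arbitrary 30 here), that comparison is skipped.
min_n <- 30
n_A_eff >= min_n && n_B_eff >= min_n   # FALSE means no test is run
```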
If testing results change across tables or waves:
- Confirm the Testing Assumptions are the same for both tables. Note that each table can use either the default settings or its own modified settings, so you need to check each table individually.
- If Multiple Comparison Correction is turned on, review How to Apply Multiple Comparison Correction to Statistical Significance Testing to ensure the setting is configured as expected (a sketch of how corrections change results follows this list).
- Look for any Data > Rules applied to the table that affect significance testing. Note that accessing rules requires a Displayr license.
- Confirm your Advanced Statistical Testing Assumptions are what you want, given the documentation and your analytical goals. Some testing scenarios have multiple settings that must be adjusted together; check the relevant Help Center documentation to ensure the inputs and settings are set up properly.
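To see why a correction changes which results are flagged, here is a small sketch using R's p.adjust on made-up p-values; the False Discovery Rate method is used purely as an example of the corrections available.

```r
# Hedged sketch: why Multiple Comparison Correction changes which results
# are significant. The p-values below are hypothetical.
p_raw <- c(0.003, 0.012, 0.034, 0.049, 0.210)
alpha <- 0.05
sum(p_raw < alpha)                         # 4 "significant" results uncorrected
p_fdr <- p.adjust(p_raw, method = "fdr")   # False Discovery Rate correction
sum(p_fdr < alpha)                         # fewer results survive correction
rbind(raw = p_raw, fdr = round(p_fdr, 3))
```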
Test individual Advanced Statistical Testing Assumptions to see what triggers the issue.
- If there are obvious settings or aspects to test, try changing/confirming those one at a time to see if one fixes or triggers the issue. Check the details using the alpha button for a particular result as you go along.
- If nothing is found from the step above, re-create the issue from scratch by Restoring the default settings.
- Slowly change the settings one at a time to confirm they are affecting results as expected, using the alpha button for a particular result as you go along. Note which setting, in particular, triggers the unexpected result.
- If re-creating from scratch produces unexpected results by default, repeat steps 2.2 and 2.3 with simpler data (for example, a similar type of data from our Colas.sav data set).
- If using simpler data still yields an unexpected result, skip to step 4, Contact Support, to clarify how the statistical testing assumptions work.
3. Figure out a workaround or a solution
If the steps above don't lead to an obvious solution, use what you've learned to work toward a solution.
- Is the setting that caused the issue required, or can you work without it? You can also replicate the testing done by other programs; see How to Replicate Quantum Significance Tests.
- Should you adjust the statistical assumptions for this particular output? You can reduce the minimum sample size or adjust any other significance settings on an individual table level instead of applying them as the default testing settings.
- Can you get what you need using multiple tables or by structuring the data differently? See How to Create a Table of Differences and How to Test Against the Previous Period without a Date/Time Variable for some ideas.
- For repeated measures testing or testing across brands and attributes, will stacking the data allow you to perform the testing you'd like? For example, stacking allows you to compare respondents' brand answers as if each brand response came from a unique respondent (see the sketch after this list).
- Can you get something close but adequate to what you ultimately need with a different process? For example, if using a large, complex table as part of the significance testing, will breaking the table up into smaller parts allow you to run similar tests?
- Can your desired testing be done using a Rule? Note these rules are rudimentary compared to our built-in testing and more limited in application. Two popular ones are How to Apply Independent Samples Column Means and Proportions Tests to a Table and How to Display Row Comparisons in a Table. Applying, modifying, and removing rules requires a Displayr license.
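As a rough picture of what stacking does to the data layout, the sketch below reshapes a made-up brand-rating table from wide to long with base R; in practice you would stack within Displayr, but the long format is what allows each brand response to be tested as if it were a separate case.

```r
# Hedged sketch of stacking (hypothetical brand-rating data), using base R.
# Wide: one row per respondent, one column per brand.
wide <- data.frame(id    = 1:4,
                   coke  = c(7, 5, 6, 8),
                   pepsi = c(6, 6, 4, 7))

# Stacked (long): one row per respondent-brand combination, so each brand
# response can be treated like a separate case when testing brand vs brand.
long <- reshape(wide, direction = "long",
                varying = c("coke", "pepsi"), v.names = "rating",
                times = c("coke", "pepsi"), timevar = "brand",
                idvar = "id")
long
t.test(rating ~ brand, data = long)   # compare Averages across brands
```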
4. Contact Support
If you've worked through these steps and haven't found a solution or need help, please contact our support team at support@displayr.com or by clicking on Help > Contact support from within your specific document in Displayr. Please provide the details of what you learned in the steps above.