Sometimes a result in a table is different from what you were expecting. A process for tracking down the cause is:

- Check the number of cases in the data set
- Review the sample size of the table or visualization
- Review the sample size for each cell in the table or visualization
- Check filters and weights
- Check the structure of the variable sets used to construct the table
- Review the raw data
- Review value attributes
- Review the inputs and other settings in the object inspector
- Follow links back to their source
- Review Rules
- Review the statistical testing assumptions
- Search Help
- Contact us

## Check the number of cases in the data set

A common cause of results that look wrong is that the data file contains too few or too many cases. To check this, click the data set in the **Data sets** tree and review the **Number of cases** shown in the object inspector.

## Review the sample size of the table or visualization

Data issues are often discovered by looking at the sample size of a table or visualization. (If you can't see the sample size, change the visualization back into a table first.) For example, the visualization below has a sample size of 895 (highlighted in yellow at the bottom).

Sometimes tables or visualizations will show a range of sample sizes, indicating that the sample size is different for different results. For example, the table below shows sample sizes varying from 758 to 873. This is caused by different cells in the table having different sample sizes. This is discussed in more detail in the next section.
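The varying sample sizes come from missing data: each cell's sample size is the count of respondents with a non-missing value for the relevant variable. Here is a minimal sketch of that counting logic in Python (illustrative data, not Displayr's internal code):

```python
import pandas as pd

# Two survey variables with different amounts of missing data
# (hypothetical values; None marks a missing response).
data = pd.DataFrame({
    "Renew": [1, 0, 1, None, 1],
    "Cancel": [0, None, None, 1, 0],
})

# Each variable's sample size is its count of non-missing responses,
# which is why different cells of a table can have different sample sizes.
sample_sizes = data.notna().sum()
print(sample_sizes["Renew"])   # 4
print(sample_sizes["Cancel"])  # 3
```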

## Review the sample size for each cell in the table or visualization

If you have a visualization rather than a table, you first need to change it back into a table by pressing the **Table** button in the object inspector.

The table above is showing row percentages. We can see the sample size for each row by selecting the table, and then selecting **object inspector > STATISTICS > Right** and **Row sample size**. The resulting table is shown below. The last column shows us that the smallest sample size, 758, is for *Cancel your subscription*.

Note that the footer says 137 cases are missing. This is calculated as the total sample size (895) less the smallest row sample size (758).
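The footer's missing count is simple subtraction. Using the numbers quoted above (a sketch of the arithmetic only, not Displayr's code):

```python
# Numbers from the table described above.
total_sample_size = 895  # sample size of the whole table
row_sample_size = 758    # smallest row sample size (Cancel your subscription)

# Cases counted as missing in the footer.
missing = total_sample_size - row_sample_size
print(missing)  # 137
```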

In grid tables, we would instead use **STATISTICS > Cells > Sample size**, as the result can differ cell-to-cell.

Pay special attention to the sample size of any NET or SUM rows and columns. These are always calculated based on people who have no missing data in any of the other cells in the row or column of the table. For example, the footer in the table below tells us that the sample size varies from 28 to 94; however, the small sample of 28 applies only to the NET, and each of the other categories has, on its own, a substantially larger sample size.
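The NET sample-size rule described above can be sketched as follows (hypothetical data and category names, not taken from the article's table): each category's sample size counts its own non-missing responses, while the NET counts only respondents with no missing data in any component category.

```python
import pandas as pd

# Hypothetical data; None marks a missing response.
data = pd.DataFrame({
    "Brand A": [1, None, 0, 1],
    "Brand B": [0, 1, None, 1],
    "Brand C": [1, 1, 1, None],
})

per_category_n = data.notna().sum()  # each category counted on its own
net_n = data.dropna().shape[0]       # complete cases only, as for a NET

print(int(per_category_n.min()))  # 3 (every category has 3 responses)
print(net_n)                      # 1 (only one respondent answered all three)
```

This is why a NET can have a much smaller sample size than any single category it summarizes.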

## Check filters and weights

You can see whether a weight or filter has been applied in the footer (if there is one), or by selecting the output and viewing the filters and weights in the object inspector. In this example, no weight has been applied, but the table has been filtered to males. If a visualization has been created with a table as an input, you need to review the weights and filters of that table, rather than of the visualization.
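Filters and weights can each move a result substantially, which is why both are worth checking. A small sketch of the effect (hypothetical column names and values; this is the general survey arithmetic, not Displayr's internals):

```python
import pandas as pd

# Hypothetical respondent-level data.
data = pd.DataFrame({
    "gender": ["Male", "Female", "Male", "Male", "Female"],
    "satisfied": [1, 0, 1, 0, 1],
    "weight": [1.0, 2.0, 1.0, 0.5, 1.5],
})

# Filter: keep males only.
males = data[data["gender"] == "Male"]

# Unweighted percentage satisfied among males.
unweighted = 100 * males["satisfied"].mean()

# Weighted percentage: each respondent counts in proportion to their weight.
weighted = 100 * (males["satisfied"] * males["weight"]).sum() / males["weight"].sum()

print(round(unweighted, 1))  # 66.7
print(round(weighted, 1))    # 80.0
```

The same variable gives a noticeably different percentage once the weight is applied, so an unexpected result is often just a forgotten filter or weight.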

## Check the structure of the variable sets used to construct the table

When Displayr imports data, it automatically groups variables into variable sets. Sometimes it groups the variables, or chooses a structure for them, in a way that is different from what you expect. See Variable Sets and Manipulating Data for more information.

## Review the raw data

Select the data set in the **Data sets** tree, right-click, and select **View in Data Editor** to review the raw data. If you select **Values**, you will see any recoding of the data.

## Review value attributes

You can see how data has been recoded by selecting any variables and pressing the **Values** button in the object inspector. Additional insight can be obtained by using the **Reset split button** in the toolbar.

## Review the inputs and other settings in the object inspector

Anything that is calculated in Displayr (variables, tables, calculations, visualizations) can be clicked on, allowing you to see its data inputs. For example, below we can see that the table is a **SUMMARY** of *Customer effort*.

Additionally, the result may be caused by other settings in the object inspector, so have a hunt around.

## Follow links back to their source

When you hover over an input in the object inspector, a tooltip will appear. For example, the tooltip obtained when hovering your mouse pointer over **Customer effort** is shown above. Click the blue circle containing the arrow in the tooltip, and Displayr will select that input. This lets you trace any calculation back to its source.

## Review Rules

A *Rule* is a bit like conditional formatting in Excel, except that it can also change the data and be applied to visualizations. As an example, the table below replaces small values with * symbols. You can verify if a rule has been applied to a table by selecting it, and going to **Properties > Rules**.

## Review the statistical testing assumptions

The statistical testing options are set in **object inspector > PROPERTIES > Significance** (see the image above).

## Search Help

The search feature in Displayr Help (which you are reading at the moment) can be used to search through our documentation.

## Contact us

If the result looks wrong, and you can't troubleshoot it yourself, please contact support@displayr.com, and give us as much information as you can. Please see the last section in What To Do When Displayr's Result is Different To Another Program's for tips on how to give us the information that we need to track such things down.
