
5 tips when your sample is smaller than you intended and your results are mostly not significant

Sometimes, despite all your efforts, the sample size of your quantitative study turns out to be smaller than you planned. Maybe your online survey had a low response rate, your database search for patient records returned many unusable or missing records, or some other scenario produced a similar outcome.

And often in these cases, your results turn out to be nonsignificant.

What’s the problem with a smaller sample?

Aside from less precision in your sample estimates and possible bias in your sample, one of the biggest problems with a small sample size is that your study loses power. Power, in the statistical sense, is the probability that your statistical tests will detect true differences or true relationships as significant; equivalently, it is the probability of correctly rejecting a false null hypothesis. When your sample size is reduced, power drops, and so does your chance of declaring genuine effects significant.
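
To see how quickly power erodes, here is a minimal Python sketch that estimates the power of a two-sample t-test by simulation. The assumed true effect size (d = 0.5), alpha level and group sizes are illustrative, not drawn from any particular study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulated_power(n_per_group, effect_size=0.5, alpha=0.05, n_sims=5000):
    """Estimate two-sample t-test power by repeated simulation."""
    significant = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(effect_size, 1.0, n_per_group)  # true difference = d
        _, p_value = stats.ttest_ind(control, treatment)
        significant += p_value < alpha
    return significant / n_sims

for n in (100, 50, 25, 10):
    print(f"n per group = {n:3d}  ->  power = {simulated_power(n):.2f}")
```

For a medium effect like this, roughly 64 participants per group are needed to reach the conventional 80% power, so the smaller runs in the sketch fall well short of it.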

If you are in this compromised situation where your hypothesis tests are not reaching significance, it may be comforting to reflect on the point made by Altman and Bland (1995, p. 485) that "absence of evidence is not evidence of absence". In other words, just because you have found no evidence of significant differences or relationships in your study does not mean that there are genuinely no differences or relationships in the phenomenon under investigation. There may well be genuine effects, but your study may lack the power to detect them.

So, what should you do in this situation?

Here are some steps to consider:

First, remember the obvious – the results of a good study should make sense. Even if your results are not significant, look for consistency in the results. For example, look to see the extent to which your group with a particular characteristic or treatment follows a trend, or is consistently higher (or lower) on your measurements than the comparison group(s). Or assess the extent to which the relationships you observe are consistently positive (or negative).
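
As a sketch of this kind of consistency check, the following Python snippet compares group means across several outcome measures and asks whether the differences all point the same way. The data and column names are made up for illustration.

```python
import pandas as pd

# Hypothetical data: one row per participant, several outcome measures.
df = pd.DataFrame({
    "group":     ["treatment"] * 5 + ["control"] * 5,
    "measure_1": [5.1, 4.8, 5.6, 5.0, 5.3, 4.2, 4.5, 4.1, 4.6, 4.4],
    "measure_2": [7.2, 6.9, 7.5, 7.1, 7.0, 6.5, 6.8, 6.4, 6.7, 6.6],
    "measure_3": [3.3, 3.6, 3.1, 3.4, 3.5, 3.0, 2.9, 3.2, 2.8, 3.1],
})

# A consistent sign of the group difference across measures is
# informative even when no single test reaches significance.
means = df.groupby("group").mean()
diffs = means.loc["treatment"] - means.loc["control"]
print(diffs)
print("Treatment consistently higher on all measures:", (diffs > 0).all())
```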

Second, realise that significance is tied to sample size, so use a measure such as the effect size, which is independent of sample size. The effect size tells you the practical significance, or meaningfulness, of your differences or relationships. Then use Cohen’s criteria, or similar, to evaluate these effects as weak, moderate or strong.
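
For two independent groups, a common effect size is Cohen's d. The sketch below computes it from the pooled standard deviation and labels it against Cohen's conventional benchmarks (roughly 0.2, 0.5 and 0.8); the data are hypothetical.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled SD."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

def label(d):
    """Cohen's conventional cut-offs: ~0.2 weak, ~0.5 moderate, ~0.8 strong."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "weak"
    if d < 0.8:
        return "moderate"
    return "strong"

treatment = [5.1, 4.8, 5.6, 5.0, 5.3]
control   = [4.2, 4.5, 4.1, 4.6, 4.4]
d = cohens_d(treatment, control)
print(f"d = {d:.2f} ({label(d)})")
```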

Third, if possible, check whether your observations are consistent with findings in reputable, published literature. For example, compare your means, mean differences, correlations, frequencies or proportions against published values to judge whether your results are plausible.
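
One way to make this comparison concrete is to put a confidence interval around your own estimate and see whether the published value falls inside it. The sketch below does this for a Pearson correlation using the standard Fisher z-transform; the sample data and the published r of 0.40 are hypothetical.

```python
import numpy as np
from scipy import stats

def correlation_ci(x, y, alpha=0.05):
    """Pearson r with a Fisher-z confidence interval."""
    r, _ = stats.pearsonr(x, y)
    z = np.arctanh(r)                      # Fisher z-transform of r
    se = 1.0 / np.sqrt(len(x) - 3)         # standard error on the z scale
    crit = stats.norm.ppf(1 - alpha / 2)
    return r, (np.tanh(z - crit * se), np.tanh(z + crit * se))

# Hypothetical small sample of n = 20 paired observations.
rng = np.random.default_rng(7)
x = rng.normal(size=20)
y = 0.4 * x + rng.normal(size=20)

r, (lo, hi) = correlation_ci(x, y)
print(f"r = {r:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
print("Published r = 0.40 inside the interval:", lo <= 0.40 <= hi)
```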

Fourth, do a backward or post hoc calculation, using G*Power or equivalent, to see what sample size you would have needed to achieve significance based on your observed mean differences or correlations. Your statistics program may also provide the power of your tests.
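
If you prefer to stay in code rather than G*Power, the statsmodels package offers equivalent calculations. The sketch below solves for the per-group sample size needed to reach 80% power for a hypothetical observed effect of d = 0.35, and then for the power actually achieved with a hypothetical 20 per group.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per group needed for 80% power at the observed effect size.
needed_n = analysis.solve_power(effect_size=0.35, alpha=0.05, power=0.80)
print(f"Required n per group: {needed_n:.0f}")

# Power the study actually achieved with 20 participants per group.
achieved = analysis.solve_power(effect_size=0.35, alpha=0.05, nobs1=20)
print(f"Achieved power with n = 20 per group: {achieved:.2f}")
```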

Finally, construct a careful argument that integrates all these factors in the context of your study as an explanation of your results. Your reader will appreciate your insight, honesty, and transparency.

Contact me at [email protected] if you need help with the analysis of your data.