Abstract

This brief paper introduces a new approach to assessing the trustworthiness of research comparisons when they are expressed numerically. The ‘number needed to disturb’ a research finding is the number of counterfactual values that would have to be added to the smallest arm of a comparison before the difference or ‘effect size’ disappears, minus the number of cases missing key values. This way of presenting the security of findings has several advantages over the use of significance tests, effect sizes and confidence intervals. It is not predicated on random sampling, full response or any specific distribution of data. It bundles together the sample size, the magnitude of the finding and the level of attrition in a way that is standardised and therefore comparable between studies.
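As a rough illustration only (not the paper's own procedure), the calculation described above might be sketched as follows. The sketch assumes the ‘effect’ is a simple difference in arm means and that the caller supplies the counterfactual value to be appended to the smaller arm; both assumptions are mine, not the source's.

```python
from statistics import mean

def nntd(smaller_arm, other_arm, counterfactual, n_missing=0, max_iter=10_000):
    """Sketch of a 'number needed to disturb' calculation.

    Counts how many copies of `counterfactual` must be appended to
    `smaller_arm` before the mean difference between the two arms
    disappears (reaches zero or reverses sign), then subtracts the
    number of cases with missing key values.
    """
    arm = list(smaller_arm)
    initial = mean(arm) - mean(other_arm)
    if initial == 0:
        return -n_missing  # no effect to disturb
    sign = 1 if initial > 0 else -1
    added = 0
    while added < max_iter:
        arm.append(counterfactual)
        added += 1
        # Stop once the original direction of the difference is gone.
        if sign * (mean(arm) - mean(other_arm)) <= 0:
            return added - n_missing
    raise ValueError("effect not disturbed within max_iter additions")
```

For example, with a smaller arm of [5, 5, 5] against [3, 3, 3, 3], appending the counterfactual value 1 three times brings the means level, so the finding is ‘disturbed’ by 3 counterfactual cases; one case with missing values would reduce that to 2.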
