Abstract

Psychological researchers may be interested in demonstrating that sets of scores are equivalent, rather than different. In such cases, equivalence analyses (equivalence and non-inferiority testing) are appropriate. However, the use of such tests has been found to be inconsistent and incorrect in other research fields (Lange and Freitag 2005). This study reviewed the use of equivalence analyses in the psychological literature to identify issues in the selection, application, and execution of these tests. To achieve this, a systematic search of several databases was conducted to identify psychological research published from 1999 to 2020 that utilized equivalence analyses. Test selection, choice of equivalence margin, equivalence margin justification and motivation, and data assessment practices were examined for 122 studies. The findings indicate wide variability in the reporting of equivalence analyses and suggest a lack of agreement amongst researchers as to what constitutes a meaningless difference. Additionally, explications of this meaninglessness (i.e., justifications of equivalence margins) are often vague, inconsistent, or inappropriate. This scoping review indicates that proficiency in the use of these statistical approaches is low in psychology. Authors should explicate all aspects of their selected equivalence analysis and demonstrate that careful consideration has been given to the equivalence margin, with a clear justification. There is also a burden of responsibility on journals and reviewers to identify sub-par reporting and to request refinement in the communication of statistical protocols in peer-reviewed research.
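For readers unfamiliar with the procedure under review, the logic of an equivalence test can be sketched in a few lines. The common two one-sided tests (TOST) approach declares two groups equivalent at level alpha when the (1 − 2·alpha) confidence interval for the mean difference falls entirely inside (−margin, +margin). The data, the margin of 0.5, and the hard-coded t critical value below are illustrative assumptions, not taken from any study in the review:

```python
# Minimal TOST equivalence sketch via the confidence-interval approach.
# Assumes a pooled-variance (Student's) t interval; the margin (0.5) and
# the example data are hypothetical.
from math import sqrt
from statistics import mean, variance

def tost_equivalent(a, b, margin, t_crit):
    """Return True if the (1 - 2*alpha) CI for mean(a) - mean(b) lies
    within (-margin, +margin); t_crit is t_{1-alpha, n1+n2-2}."""
    n1, n2 = len(a), len(b)
    diff = mean(a) - mean(b)
    # Pooled sample variance across both groups
    pooled_var = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    se = sqrt(pooled_var * (1 / n1 + 1 / n2))
    lo, hi = diff - t_crit * se, diff + t_crit * se  # 90% CI when alpha = 0.05
    return -margin < lo and hi < margin

treatment = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
control   = [5.0, 5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1]
# t_{0.95, df=14} is about 1.761 (hard-coded to keep the sketch stdlib-only)
print(tost_equivalent(treatment, control, margin=0.5, t_crit=1.761))
```

The review's central concern is visible in the `margin` argument: the conclusion of "equivalence" depends entirely on that number, which is why an unjustified or vaguely justified margin undermines the analysis.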
