Abstract
The aim of this paper is to provide a tutorial on reliability in research and clinical practice. Reliability is defined as the quality of a measure that produces reproducible scores on repeat administrations of a test. Reliability is thus a prerequisite for test validity. All measurements are attended by measurement error. Systematic bias is a non-random change between trials in a test-retest situation. Random error is the 'noise' in the measurement or test. Systematic bias should be evaluated separately from estimates of random error. For variables measured on an interval or ratio scale, the most appropriate estimates of random error are the typical error, the percent coefficient of variation, and the 95% limits of agreement. These can be derived via analysis of variance procedures. Estimates of relative, rather than absolute, reliability may be obtained from the intraclass correlation coefficient. For variables that have categories as values, the kappa coefficient is recommended. Irrespective of the statistic chosen, 95% confidence intervals should be reported to define the range of values within which the true population value is likely to reside. Small random error implies greater precision for single trials. More precise tests and measurements facilitate more sensitive monitoring of the effects of treatment interventions in research or practice settings.
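The abstract's distinction between systematic bias and random error can be made concrete with a short calculation. The sketch below, using invented test-retest data, estimates systematic bias as the mean of the trial-to-trial differences, the typical error as the standard deviation of those differences divided by √2, the percent coefficient of variation as the typical error expressed relative to the grand mean, and the 95% limits of agreement as bias ± 1.96 × SD of the differences. The data and variable names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: test-retest reliability statistics from paired trials.
# The data below are invented for demonstration purposes only.
import math
import statistics

trial1 = [71.0, 68.5, 74.2, 69.8, 72.1, 70.4]  # e.g. scores on first administration
trial2 = [72.3, 67.9, 75.0, 70.5, 71.2, 71.1]  # scores on repeat administration

diffs = [b - a for a, b in zip(trial1, trial2)]

# Systematic bias: mean change between trials (non-random shift).
bias = statistics.mean(diffs)

# Random error ('noise'): typical error = SD of differences / sqrt(2).
sd_diff = statistics.stdev(diffs)
typical_error = sd_diff / math.sqrt(2)

# Percent coefficient of variation: typical error as a % of the grand mean.
grand_mean = statistics.mean(trial1 + trial2)
cv_percent = 100 * typical_error / grand_mean

# 95% limits of agreement: bias +/- 1.96 * SD of differences.
loa_lower = bias - 1.96 * sd_diff
loa_upper = bias + 1.96 * sd_diff

print(f"bias={bias:.2f}  typical error={typical_error:.2f}  CV%={cv_percent:.2f}")
print(f"95% limits of agreement: [{loa_lower:.2f}, {loa_upper:.2f}]")
```

Reporting the bias separately from the typical error and limits of agreement follows the abstract's recommendation that systematic and random components of measurement error be evaluated independently.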