Abstract

Clinical vestibular technology is rapidly evolving to improve objective assessment of vestibular function. Understanding the reliability and expected score ranges of emerging clinical vestibular tools is important for gauging how these tools should be used as clinical endpoints. The objective of this study was to evaluate the inter-rater and test-retest reliability of four vestibular tools using intraclass correlation coefficients (ICCs) and to determine expected ranges of scores using smallest real difference (SRD) measures. Sixty healthy graduate students completed two 1-hour sessions, at most one week apart, each consisting of two video head-impulse tests (vHIT), computerized dynamic visual acuity (cDVA) tests, and a smartphone-assisted bucket test (SA-SVV). Thirty students were tested by different testers at each session (inter-rater) and 30 by the same tester (test-retest). ICCs and SRDs were calculated for both conditions. Most measures fell within the moderate ICC range (0.50-0.75). ICCs were higher for cDVA in the inter-rater subgroup and higher for vHITs in the test-retest subgroup. Measures from the four tools evaluated were moderately reliable. There may be a tester effect on reliability, particularly for vHITs. Further research should repeat these analyses in a patient population and explore methodological differences between vHIT systems.
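For readers unfamiliar with the SRD, it is conventionally derived from the standard error of measurement (SEM), which in turn depends on the reliability ICC. The sketch below shows the standard formulation from the reliability literature, not necessarily the exact computation used in this study; the SD and ICC values in the usage example are hypothetical.

```python
import math

def smallest_real_difference(sd, icc, z=1.96):
    """Smallest real difference (SRD) from a reliability ICC.

    SEM = SD * sqrt(1 - ICC)
    SRD = z * sqrt(2) * SEM
    The sqrt(2) term reflects that a change score involves
    two measurements, each carrying measurement error.
    """
    sem = sd * math.sqrt(1.0 - icc)
    return z * math.sqrt(2.0) * sem

# Hypothetical example: score SD of 5 units, moderate ICC of 0.60
print(round(smallest_real_difference(5.0, 0.60), 2))  # → 8.77
```

A change between sessions smaller than the SRD cannot be distinguished from measurement error at the chosen confidence level, which is why SRD values help define expected score ranges for clinical endpoints.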
