Abstract

The typical process for assessing inter-rater reliability relies on training raters within a single research team. What remains unclear is whether inter-rater reliability scores between research teams demonstrate adequate reliability. This study examined inter-rater reliability among 16 researchers who assessed fundamental motor skills using the Test of Gross Motor Development, 3rd edition (TGMD-3). Total score agreement (ICC = 0.363) and locomotor subscale agreement (ICC = 0.383) were “very poor,” while ball skills subscale agreement (ICC = 0.478) was “poor.” Consistency of total (ICC = 0.757), locomotor (ICC = 0.730), and ball skills (ICC = 0.746) scores was “fair.” Component percentage agreement ranged from 40.5% to 96.2%. These data suggest that there are significant differences in how research groups evaluate fundamental motor skills, stemming from the subjective nature of scoring. Consistency and agreement among users need to be addressed in motor development research to allow direct comparisons across studies that use process-oriented measures.
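The “agreement” and “consistency” values reported above are two different forms of the intraclass correlation coefficient (ICC): agreement-type ICCs penalize systematic scoring differences between raters, whereas consistency-type ICCs ignore them, which is why the consistency estimates are higher. The following is a minimal sketch of that distinction, assuming the `pingouin` Python package and using simulated scores; it is illustrative only and is not the authors' analysis or data.

```python
# Illustrative sketch (simulated data): agreement- vs. consistency-type ICCs
# for multi-rater motor skill scores, computed with the pingouin package.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
n_children, n_raters = 10, 16  # hypothetical: 10 videos scored by 16 raters

# Simulate total scores: a "true" child score plus a per-rater bias and noise.
true_score = rng.normal(70, 8, n_children)
rater_bias = rng.normal(0, 5, n_raters)
scores = (true_score[:, None] + rater_bias[None, :]
          + rng.normal(0, 4, (n_children, n_raters)))

# Long format expected by pingouin: one row per (child, rater) rating.
long = pd.DataFrame({
    "child": np.repeat(np.arange(n_children), n_raters),
    "rater": np.tile(np.arange(n_raters), n_children),
    "score": scores.ravel(),
})

icc = pg.intraclass_corr(data=long, targets="child",
                         raters="rater", ratings="score")
# ICC2 (two-way random, absolute agreement) is lowered by systematic rater
# bias; ICC3 (two-way mixed, consistency) is not, so it is typically higher.
print(icc.set_index("Type").loc[["ICC2", "ICC3"],
                                ["Description", "ICC", "CI95%"]])
```

In this simulation, the injected rater bias drives the agreement ICC well below the consistency ICC, mirroring the gap between the agreement and consistency values reported in the abstract.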
