Abstract

Understanding the reliability of performance analysis tools is important to ensure that match-to-match comparisons can be undertaken with confidence in the consistency of coding across situations. Few published studies have examined the reliability of commonly used performance analysis tools. The aim of this project was to assess the inter- and intra-rater reliability of the NetballStats application and to compare live and video-based coding situations. Two coders coded eight netball matches using the NetballStats application, coding each match live and then twice from video. Level of agreement was assessed for frequency counts across the variables coded. Results showed that intra-rater agreement was higher than inter-rater agreement and that reliability was better for video coding than for live coding. High-frequency events automatically coded by the application, and events that were well defined, showed greater levels of agreement than lower-frequency events and subjectively judged events. Live coding underrepresents the occurrence of events, particularly high-frequency events such as ‘possession’. To ensure reliability between coders, clubs should provide coders with an extensive training program and clear instructions on coding subjective events. Coaches should be aware that live coding underestimates some event types and should factor this into their decision-making processes.
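
The abstract does not state which agreement statistic was used. As an illustrative sketch only, inter-rater agreement on event frequency counts could be quantified as symmetric percentage agreement per event type; the function, event names, and counts below are hypothetical, not taken from the study:

    def percent_agreement(counts_a, counts_b):
        """Symmetric percentage agreement per event type: the smaller
        count divided by the larger, so agreement is always <= 100%."""
        agreement = {}
        for event in counts_a.keys() | counts_b.keys():
            a, b = counts_a.get(event, 0), counts_b.get(event, 0)
            larger = max(a, b)
            agreement[event] = 100.0 if larger == 0 else 100.0 * min(a, b) / larger
        return agreement

    # Hypothetical frequency counts for one match from two coders.
    coder_1 = {"possession": 142, "intercept": 9, "turnover": 21}
    coder_2 = {"possession": 131, "intercept": 8, "turnover": 24}

    for event, pct in sorted(percent_agreement(coder_1, coder_2).items()):
        print(f"{event}: {pct:.1f}% agreement")

Using the larger count as the denominator keeps the measure symmetric between coders; reliability studies of this kind often report kappa statistics or intraclass correlation coefficients instead.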
