Abstract

Aim: This study aimed (i) to test the inter-rater reliability of swimming teachers, (ii) to test the effect of discussion among the swimming teachers on inter-rater reliability and (iii) to verify the swimming teachers' intra-rater reliability. Method: Twenty-one swimming learners (14.1 ± 5.1 years old) swam two 25-m front crawl bouts at a comfortable speed, without breathing between the 6th and the 20th metre, and were filmed. Three swimming teachers with different academic backgrounds and swimming skills evaluated the swimmers' right upper limb using a 20-item checklist. In the first step, the teachers assessed all 20 items; in the second step, they discussed their individual evaluation criteria and selected the five items they considered most relevant. Inter- and intra-rater reliability were tested with the Fleiss kappa coefficient. Results: In the first step, substantial reliability was found for item 3 and for the movement descriptors of items 3 and 20, and nearly perfect reliability for the movement descriptor of item 13. In the second step, moderate reliability was found only for the movement descriptor of item 20. Only the most experienced evaluator showed substantial intra-rater reliability (items 4 and 10) and moderate intra-rater reliability (item 20). Conclusion: The proposed discussion method did not have the expected effect on inter-rater reliability. The swimming teacher with the higher academic degree and swimming skills showed better intra-rater reliability. Some items and movement descriptors of the 20-item checklist can be used in practical settings.

Highlights

  • At any swimming level, learning is affected by the interaction of several components, including teacher/coach action[1,2,3]

  • We recognize the importance of the results reported in the literature, but some checklists emphasize only movement errors, suggesting that there is only one rigid way to swim[7]

  • Perfect reliability was found in MD1 of item 13 and MD3 of item 20


Introduction

At any swimming level, learning is affected by the interaction of several components, including teacher/coach action[1,2,3]. Inter- and intra-rater reliability tests are common approaches that support the satisfactory use of a checklist by different teachers, and by the same teacher over time[6,10,11]. Reliability here refers to the ability to reproduce similar measures on different occasions[11,12], an important procedure for ensuring that results do not differ significantly[6,8,10]. Although rater experience should not influence the results, the literature indicates that more experienced evaluators have better assessment competences[8,10]. This is important because it is from this assessment that learning exercises will be proposed[5,10,11].
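The reliability statistic used in this study, Fleiss' kappa, extends chance-corrected agreement to more than two raters. As a minimal sketch (not the authors' code; `fleiss_kappa` is a hypothetical helper and NumPy is an assumed dependency), given a subjects-by-categories matrix of rating counts, the statistic compares the observed pairwise agreement with the agreement expected by chance:

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray) -> float:
    """Fleiss' kappa for a subjects-by-categories count matrix.

    ratings[i, j] = number of raters who assigned subject i to category j;
    every row must sum to the same number of raters n.
    """
    ratings = np.asarray(ratings, dtype=float)
    n_subjects, _ = ratings.shape
    n_raters = ratings[0].sum()
    # Per-subject agreement: proportion of rater pairs that agree.
    p_i = (np.square(ratings).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                                  # observed agreement
    p_j = ratings.sum(axis=0) / (n_subjects * n_raters) # category proportions
    p_e = np.square(p_j).sum()                          # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Three raters scoring two checklist items on a present/absent scale.
counts = np.array([[2, 1],
                   [1, 2]])
print(round(fleiss_kappa(counts), 3))  # -0.333: agreement below chance
```

Values near 1 indicate strong agreement; values at or below 0 indicate agreement no better than chance, which is the scale behind qualifiers such as "moderate", "substantial" and "nearly perfect" used in the results.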

