Abstract

Soccer coaches and scouts typically assess in-game soccer performance to predict players’ future performance. However, there is hardly any research on the reliability and predictive validity of coaches’ and scouts’ performance assessments, or on strategies they can use to optimize their predictions. In the current study, we examined whether robust principles from psychological research on selection – namely structured information collection and mechanical combination of predictor information through a decision rule – improve soccer coaches’ and scouts’ performance assessments. A total of n = 96 soccer coaches and scouts participated in an elaborate within-subjects experiment. Participants watched soccer players’ performance on video, rated their performance in both a structured and an unstructured manner, and combined their ratings both holistically and mechanically. We examined the inter-rater reliability of the ratings and assessed their predictive validity by relating them to players’ future market values. Contrary to our expectations, ratings based on structured assessment paired with mechanical combination did not show higher inter-rater reliability or predictive validity. Instead, unstructured-holistic ratings yielded the highest reliability and predictive validity, although the differences were marginal. Overall, reliability was poor and predictive validity was small to moderate, regardless of the approach used to rate players’ performance. The findings provide insight into the difficulty of predicting future performance in soccer.
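To illustrate the idea of mechanical combination for readers unfamiliar with it, the short Python sketch below shows how structured criterion ratings could be combined through a fixed, equal-weight decision rule and then related to a criterion such as future market value. All data, variable names, and the equal-weight rule are hypothetical assumptions for illustration; this is not the analysis code used in the study.

    import numpy as np
    from scipy.stats import pearsonr, zscore

    rng = np.random.default_rng(0)

    # Hypothetical data: 30 players rated on 4 structured criteria (1-10 scale),
    # plus each player's future market value as the criterion.
    ratings = rng.integers(1, 11, size=(30, 4)).astype(float)
    future_market_value = rng.lognormal(mean=13.0, sigma=1.0, size=30)

    # Mechanical combination: a fixed decision rule (here, an equal-weight mean
    # of standardized criterion ratings) instead of an intuitive overall judgment.
    mechanical_score = zscore(ratings, axis=0).mean(axis=1)

    # Predictive validity: correlate the combined score with the criterion
    # (log-transformed, since market values are heavily skewed).
    r, p = pearsonr(mechanical_score, np.log(future_market_value))
    print(f"predictive validity r = {r:.2f} (p = {p:.3f})")

A holistic combination would instead ask the rater for a single overall judgment; the study compares the reliability and predictive validity of both approaches.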
