Abstract

The Balance Error Scoring System (BESS) is a human-scored, field-based balance test used in cases of suspected concussion. Recently developed instrumented alternatives to human scoring carry substantial advantages over traditional testing, but they have thus far reported relatively abstract outcomes that may not be useful to clinicians or coaches. In contrast, the Automated Assessment of Postural Stability (AAPS) is a computerized system that tabulates error events in accordance with the original description of the BESS. This study compared AAPS-derived and human-based BESS scores. Twenty-five healthy adults performed the modified BESS. Each test was scored twice by three human raters and twice by the computerized system. Interrater (between-human) and inter-method (AAPS vs. human) agreement were quantified with intraclass correlation coefficients (ICC(2,1)) alongside Bland-Altman limits of agreement (LOA). Interrater analyses were significant (p < 0.005) and demonstrated good to excellent agreement. Inter-method analyses were also significant (p < 0.005), with agreement ranging from poor to excellent. Computerized scores were equivalent across rating occasions. LOA ranges for AAPS vs. the human average exceeded the average LOA ranges between human raters. Coaches and clinicians may consider a system such as AAPS to automate balance testing while maintaining the familiarity of human-based scoring, although its scores should not yet be considered interchangeable with those of a human rater.
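The agreement statistics named in the abstract can be computed from a subjects-by-raters matrix of error counts. The sketch below is purely illustrative and not taken from the study: it implements the standard ICC(2,1) formula (two-way random effects, absolute agreement, single rater) and Bland-Altman 95% limits of agreement, assuming NumPy; the function names and any data passed to them are hypothetical.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC.

    scores: array of shape (n_subjects, k_raters), e.g. BESS error totals.
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-subject means
    col_means = scores.mean(axis=0)   # per-rater means
    # Mean squares from a two-way ANOVA without replication.
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)  # subjects (rows)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)  # raters (columns)
    sse = np.sum(
        (scores - row_means[:, None] - col_means[None, :] + grand) ** 2
    )
    mse = sse / ((n - 1) * (k - 1))                       # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def bland_altman_loa(a, b):
    """95% limits of agreement between two paired sets of scores.

    Returns (lower, upper) = bias -/+ 1.96 * SD of the differences.
    """
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)
    return bias - half_width, bias + half_width
```

For example, `icc_2_1(ratings)` on a 25x3 matrix of one score per subject per rater gives the interrater ICC, and `bland_altman_loa(aaps_scores, human_mean_scores)` gives the inter-method LOA; the upper minus lower limit is the LOA range the abstract compares across methods.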
