Abstract

Many machine learning models show black box characteristics and, therefore, a lack of transparency, interpretability, and trustworthiness. This strongly limits their practical application in clinical contexts. To overcome these limitations, Explainable Artificial Intelligence (XAI) has shown promising results. The current study examined the influence of different input representations on a trained model’s accuracy, interpretability, and clinical relevance using XAI methods. The gait of 27 healthy subjects and 20 subjects after total hip arthroplasty (THA) was recorded with an inertial measurement unit (IMU)-based system. Three different input representations were used for classification. Local Interpretable Model-Agnostic Explanations (LIME) was used for model interpretation. The best accuracy was achieved with automatically extracted features (mean accuracy Macc = 100%), followed by features based on simple descriptive statistics (Macc = 97.38%) and waveform data (Macc = 95.88%). At the global level, sagittal movement of the hip, knee, and pelvis as well as transversal movement of the ankle were especially important for this specific classification task. The current work shows that the type of input representation crucially determines interpretability as well as clinical relevance. A combined approach using different forms of representation seems advantageous. The results might assist physicians and therapists in identifying and addressing individual pathologic gait patterns.

Highlights

  • Identification and discrimination of group differences are important aspects of biomechanical research [1,2]

  • A linear Support Vector Machine (SVM) with MinMaxScaler performed best for the gait-specific data (Macc = 97.38%)

  • XAI is promising for making the decisions of machine learning models more transparent, interpretable, and trustworthy
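The best-performing pipeline named above (a linear SVM preceded by MinMax scaling) can be sketched in a few lines of scikit-learn. The data below are illustrative placeholders, not the study's recordings: a hypothetical feature matrix for 47 subjects (27 healthy, 20 THA) with an artificial group difference injected so the toy task is learnable.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical descriptive-statistics features (e.g. means/ranges of joint
# angles) for 47 subjects: 27 healthy (label 0) and 20 after THA (label 1).
X = rng.normal(size=(47, 8))
y = np.array([0] * 27 + [1] * 20)
X[y == 1] += 0.8  # inject a group difference so the classes are separable

# MinMax scaling followed by a linear-kernel SVM, as in the highlight above.
clf = make_pipeline(MinMaxScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

For the interpretation step described in the abstract, a fitted pipeline like this can be passed to `lime.lime_tabular.LimeTabularExplainer` (from the `lime` package), whose `explain_instance` method attributes an individual prediction to the input features — yielding the kind of per-subject explanation the study uses to relate model decisions to specific joint movements.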


Introduction

Identification and discrimination of group differences are important aspects of biomechanical research [1,2]. The progressive development of motion analysis systems based on inertial measurement units (IMUs) contributes in particular to the generation of large amounts of data, because such systems make valid and reliable biomechanical data accessible [5]. This provides the potential to generate new knowledge and a better understanding of human biomechanics. However, many machine learning models show black box characteristics and a lack of transparency [10]. This does not comply with the requirements of the European General Data Protection Regulation.
