Abstract

Machine learning (ML) has been applied to human gait data to predict appropriate assistive devices. However, uptake in medical settings remains low because the black-box nature of such models prevents clinicians from understanding how they operate, which has motivated research into explainable ML. Studies have recommended local interpretable model-agnostic explanations (LIME) because it builds sparse linear models around an individual prediction in its local vicinity, making it fast, and because it can be applied to any ML model. LIME, however, is not always stable. This research aimed to make LIME stable by replacing its random sampling step with Gaussian mixture model (GMM) sampling. To test the performance of GMM-LIME, supervised ML models were used, since studies report accuracies above 90% on human gait data: a neural network was adopted for the GaitRec dataset and Random Forest (RF) for the HugaDB dataset. The maximum accuracies attained were 95% (multilayer perceptron) and 99% (RF). Graphical results on stability and Jaccard similarity scores are presented for both the original LIME and GMM-LIME. Unlike the original LIME, GMM-LIME produced not only more accurate and reliable but also consistently stable explanations.
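The core idea above (swapping LIME's independent Gaussian perturbations for samples drawn from a GMM fitted to the data, then scoring explanation stability with Jaccard similarity of top-k features across repeated runs) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy black-box function, surrogate choice (ridge regression), and all parameter values are assumptions, standing in for the RF/MLP models and gait datasets used in the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.mixture import GaussianMixture

def jaccard(a, b):
    """Jaccard similarity of two feature-index sets (1.0 = identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def black_box(X):
    # Toy stand-in for the trained classifier (RF / MLP in the study).
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * X[:, 2]

def explain(sampler, k=2, n=200):
    """LIME-style local explanation: fit a sparse-ish linear surrogate on
    perturbed samples labelled by the black box; return top-k feature indices."""
    Z = sampler(n)
    surrogate = Ridge(alpha=1.0).fit(Z, black_box(Z))
    return frozenset(np.argsort(np.abs(surrogate.coef_))[-k:].tolist())

def stability(sampler, runs=10):
    """Mean pairwise Jaccard similarity of top-k features across repeated runs."""
    tops = [explain(sampler) for _ in range(runs)]
    pairs = [(i, j) for i in range(runs) for j in range(i + 1, runs)]
    return float(np.mean([jaccard(tops[i], tops[j]) for i, j in pairs]))

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # toy feature matrix in place of gait features
x0 = X[0]                      # the instance being explained

# Original-LIME-style sampling: independent Gaussian noise around the instance.
lime_sampler = lambda n: x0 + rng.normal(scale=1.0, size=(n, x0.size))

# GMM-LIME-style sampling: draw perturbations from a mixture fitted to the
# data, so samples respect the data distribution rather than pure noise.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
gmm_sampler = lambda n: gmm.sample(n)[0]

print("original-LIME stability:", stability(lime_sampler))
print("GMM-LIME stability:     ", stability(gmm_sampler))
```

Both stability scores lie in [0, 1]; the paper's claim is that the GMM-based sampler keeps the score consistently high across runs, where the random-perturbation sampler fluctuates.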

