Abstract

Background: Myoelectric pattern recognition systems can decode movement intention to drive upper-limb prostheses. Despite recent advances in academic research, the commercial adoption of such systems remains low. This limitation is mainly due to the lack of classification robustness and the simultaneous requirement for a large number of electromyogram (EMG) electrodes. We propose to address these two issues with a multi-modal approach that combines surface electromyography (sEMG) with inertial measurements (IMs) and an appropriate training data collection paradigm. We demonstrate that this can significantly improve classification performance compared to conventional techniques based exclusively on sEMG signals.

Methods: We collected and analyzed a large dataset comprising recordings of 20 able-bodied and two amputee participants executing 40 movements. Additionally, we conducted a novel real-time prosthetic hand control experiment with 11 able-bodied subjects and one amputee using a state-of-the-art commercial prosthetic hand. A systematic performance comparison was carried out to investigate the potential benefit of incorporating IMs in prosthetic hand control.

Results: The inclusion of IM data improved performance significantly, increasing classification accuracy (CA) in the offline analysis and improving completion rates (CRs) in the real-time experiment. Our findings were consistent across able-bodied and amputee subjects. Integrating the sEMG electrodes and IM sensors within a single sensor package enabled us to achieve high-level performance using, on average, 4-6 sensors.

Conclusions: The results of our experiments suggest that IMs can form an excellent complementary signal source for upper-limb myoelectric prostheses. We believe that multi-modal control solutions have the potential to improve the usability of upper-extremity prostheses in real-life applications.
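As an illustration of the multi-modal approach described above, the sketch below extracts simple time-domain features from windowed sEMG channels, concatenates them with summary statistics of the inertial channels (accelerometer, gyroscope, magnetometer), and trains a classifier on the fused feature vectors. The window length, the feature set, the synthetic data, and the choice of a linear discriminant classifier are illustrative assumptions; the abstract does not specify these details.

```python
# Minimal sketch of multi-modal (sEMG + inertial) feature fusion for movement
# classification. All dimensions and the classifier choice are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def emg_features(window):
    """Basic time-domain features per sEMG channel: mean absolute value,
    waveform length, and zero-crossing count. `window` is (samples, channels)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(window[:-1] * window[1:] < 0, axis=0)
    return np.concatenate([mav, wl, zc])

def im_features(window):
    """Simple summary statistics per inertial channel (mean and standard deviation)."""
    return np.concatenate([np.mean(window, axis=0), np.std(window, axis=0)])

def fused_features(emg_window, im_window):
    """Concatenate sEMG and inertial features into one multi-modal vector."""
    return np.concatenate([emg_features(emg_window), im_features(im_window)])

# Toy example with synthetic windows: 8 sEMG channels, 9 inertial channels
# (3-axis accelerometer, gyroscope, and magnetometer), 5 hypothetical classes.
rng = np.random.default_rng(0)
X = np.array([fused_features(rng.standard_normal((200, 8)),
                             rng.standard_normal((200, 9)))
              for _ in range(100)])
y = rng.integers(0, 5, size=100)

clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict(X[:3]))
```

In this sketch the fusion happens at the feature level, i.e., the classifier sees a single concatenated sEMG + IM vector per window; other fusion strategies (e.g., separate classifiers per modality with decision-level combination) are equally possible and are not implied by the abstract.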

Highlights

  • Myoelectric pattern recognition systems can decode movement intention to drive upper-limb prostheses

  • In the offline analysis, our first aim was to assess the predictive performance of the different modalities explored in this study, that is, the surface electromyography (sEMG) signal, accelerometer, gyroscope, and magnetometer data, and various combinations thereof

  • We examined the case of including both EMG and inertial measurement (IM) information from an optimally selected subset of sensors (a selection sketch follows this list)
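One common way to arrive at a small sensor subset, as referenced in the last highlight, is greedy forward selection driven by cross-validated classification accuracy. The sketch below follows that idea; the selection algorithm, the 5-fold cross-validation, and the linear discriminant classifier are assumptions for illustration, not details taken from the paper.

```python
# Illustrative greedy forward sensor selection (an assumed procedure, not
# necessarily the one used in the study).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def select_sensors(X, y, sensor_columns, max_sensors=6, cv=5):
    """Greedily add the sensor whose feature columns give the largest gain in
    cross-validated accuracy. `sensor_columns` maps a sensor name to the column
    indices that its sEMG + IM features occupy in X."""
    selected, remaining, history = [], list(sensor_columns), []
    while remaining and len(selected) < max_sensors:
        scores = {}
        for s in remaining:
            cols = np.concatenate([sensor_columns[k] for k in selected + [s]])
            scores[s] = cross_val_score(LinearDiscriminantAnalysis(),
                                        X[:, cols], y, cv=cv).mean()
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
        history.append(scores[best])
    return selected, history

# Example call (hypothetical names and column indices):
# subset, accs = select_sensors(X, y, {"sensor1": [0, 1, 2], "sensor2": [3, 4, 5]})
```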


Introduction

Pattern recognition-based systems have been very successful in decoding movement intent and have recently found their way into commercial products.

