Abstract

Working towards robust motion recognition systems for assistive technology control, the widespread approach has been to use a large number of, often multi-modal, sensors. In this paper, we develop single-sensor motion recognition systems. Exploiting the peripheral nature of surface electromyography (sEMG) data acquisition, we optimise the information extracted from individual sEMG sensors. This allows the number of sEMG sensors to be reduced, or provides contingencies in a system with redundancies. In particular, we process sEMG readings captured at the trapezius descendens and platysma muscles. We demonstrate that sEMG readings captured at one muscle contain distinct information on movements or contractions of other agonists. We used trapezius and platysma muscle sEMG data captured from able-bodied participants and participants with tetraplegia to classify shoulder movements and platysma contractions using white-box supervised learning algorithms. Using the trapezius sensor, shoulder raise is classified with 99% accuracy. With subject-specific multi-class classification, shoulder raise, shoulder forward and shoulder backward are classified with 94% accuracy amongst object raise and shoulder raise-and-hold data in able-bodied adults. A three-way classification of the platysma sensor data captured from participants with tetraplegia achieves 95% accuracy on platysma contraction and shoulder raise detection.
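
To illustrate the kind of white-box pipeline described above, the following is a minimal sketch in Python, not the authors' implementation: it extracts classic time-domain features (mean absolute value, root mean square, waveform length, zero crossings) from windows of a single simulated sEMG channel and trains a shallow decision tree for three-class classification. The window length, feature set, class amplitudes and library choice (scikit-learn) are all illustrative assumptions.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

WIN = 200  # assumed 200-sample analysis window (e.g. 200 ms at 1 kHz)

def features(window):
    # Classic time-domain sEMG features for one window.
    mav = np.mean(np.abs(window))                                   # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                             # root mean square
    wl = np.sum(np.abs(np.diff(window)))                            # waveform length
    zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))  # zero crossings
    return [mav, rms, wl, zc]

rng = np.random.default_rng(0)
X, y = [], []
# Synthetic stand-in for labelled recordings: three classes that differ
# in activation amplitude (e.g. rest, shoulder raise, platysma contraction).
for label, amp in enumerate([0.05, 0.4, 0.8]):
    for _ in range(200):
        X.append(features(amp * rng.standard_normal(WIN)))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(
    np.array(X), np.array(y), test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)
print(f"test accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")

A shallow decision tree keeps the mapping from features to movement class fully inspectable, which is one reason white-box models are attractive for safety-relevant assistive control.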

Highlights

  • Human–machine interfaces (HMI) are a crucial part of assistive and rehabilitation devices such as prosthetic limbs, exoskeletons or neuroprosthetic solutions

  • Multi-class classification is attempted where the data from one surface electromyography (sEMG) sensor are used to detect multiple actions, culminating in the classification of both shoulder raise (SR) and platysma contraction (PC) in a participant with tetraplegia using the sensor placed on the platysma muscle

  • We have demonstrated how single-sEMG sensor systems can be used to recognise multiple movements

Introduction

Human–machine interfaces (HMI) are a crucial part of assistive and rehabilitation devices such as prosthetic limbs, exoskeletons or neuroprosthetic solutions. To activate an assistive device, a system may employ speech recognition [3], eye tracking [4,5], kinematic data [6,7], push-buttons [8] or biological signals such as electromyography [9,10]. A number of systems employ multimodal strategies, such as measurements of muscle activation in parallel with joint kinematics [11–13], providing diverse information for the decision-making algorithm and control input. However, little improvement has so far been achieved in the functionality, intuitiveness and performance of these solutions when used in everyday life. The structured environments in which the majority of the solutions proposed over the years have been evaluated fall short of replicating real-life challenges [14]. The classification accuracies and the sophistication of the methods used have not yet translated to the real world, while their practicality and calibration needs remain unrealistic.
