Abstract
Recent work has shown that targeting motor impairments as early as possible, while using wearable mechatronic devices for assisted therapy, can improve rehabilitation outcomes. However, despite the advanced progress on control methods for wearable mechatronic devices, the need for a more natural interface that allows for better control remains. To address this issue, electromyography (EMG)-based gesture recognition systems have been studied as a potential solution for human–machine interface applications. Recent studies have focused on developing user-independent gesture recognition interfaces to reduce calibration times for new users. Unfortunately, given the stochastic nature of EMG signals, the performance of these interfaces is negatively impacted. To mitigate this problem, this work presents a user-independent gesture classification method based on a sensor fusion technique that combines EMG data with inertial measurement unit (IMU) data. The Myo Armband was used to measure muscle activity and motion data from healthy subjects. Participants were asked to perform seven types of gestures in four different arm positions while wearing the Myo on their dominant limb. Data obtained from 22 participants were used to classify the gestures using three different classification methods. Overall, average classification accuracies in the range of 67.5–84.6% were obtained, with the Adaptive Least-Squares Support Vector Machine model achieving accuracies as high as 92.9%. These results suggest that, by using the proposed sensor fusion approach, it is possible to achieve a more natural interface that allows better control of wearable mechatronic devices during robot-assisted therapies.
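As a rough illustration of the feature-level fusion described above, the sketch below concatenates common time-domain EMG features with simple IMU statistics and feeds the fused vector to a support vector machine. The window lengths, feature choices, and the generic `SVC` classifier are illustrative assumptions, not necessarily the configuration used in the study, and the data here is synthetic.

```python
# Hypothetical sketch of feature-level EMG/IMU fusion for gesture
# classification. Window lengths, features, and classifier are
# illustrative assumptions; the data is synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def emg_features(window):
    """Time-domain features per EMG channel: mean absolute value,
    waveform length, and zero-crossing count."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)
    return np.concatenate([mav, wl, zc])

def imu_features(window):
    """Mean and standard deviation per IMU channel."""
    return np.concatenate([np.mean(window, axis=0), np.std(window, axis=0)])

def fused_features(emg_window, imu_window):
    """Concatenate EMG and IMU descriptors into one fused vector."""
    return np.concatenate([emg_features(emg_window), imu_features(imu_window)])

# Synthetic example: 8 EMG channels (Myo, 200 Hz) and 6 IMU channels (50 Hz),
# one-second windows, seven gesture classes.
rng = np.random.default_rng(0)
X = np.array([fused_features(rng.standard_normal((200, 8)),
                             rng.standard_normal((50, 6)))
              for _ in range(140)])
y = rng.integers(0, 7, size=140)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))
```

Feature-level fusion (concatenating descriptors before classification, as above) is only one design choice; fusing the decisions of separate EMG and IMU classifiers is a common alternative.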
Highlights
Recently, robot rehabilitation therapy has shown great potential as a complementary method to traditional rehabilitation techniques
This paper focuses on the enhancement of multiple user-independent classification models, including the model presented in the preliminary study, using sensor fusion, and compares the performance of the models against each other
The accuracy of each user-independent classification method was obtained by classifying data from the seven gestures using two sensor configurations: EMG alone, and EMG combined with inertial measurement unit (IMU) data
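As a hedged illustration of that two-configuration comparison, the following sketch cross-validates the same classifier on an EMG-only feature set and a fused EMG+IMU feature set. The feature dimensionalities and the data are synthetic placeholders, not the study's recordings.

```python
# Hypothetical comparison of EMG-only vs. fused EMG+IMU feature sets
# via cross-validation; all data is synthetic.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
y = rng.integers(0, 7, size=140)          # seven gesture classes
X_emg = rng.standard_normal((140, 24))    # EMG-only features (assumed size)
X_fused = rng.standard_normal((140, 36))  # EMG + IMU features (assumed size)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
for name, X in [("EMG", X_emg), ("EMG+IMU", X_fused)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```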
Summary
Recently, robot rehabilitation therapy has shown great potential as a complementary method to traditional rehabilitation techniques. For robot-assisted therapies to be effective, patients must experience the interaction with the robotic device as natural while, at the same time, receiving assistance from the robot based on their performance during the rehabilitation session [1]. To address this issue, gesture recognition has been considered a possible solution for human–machine interface applications [2,3], with electromyography (EMG) being the signal type most commonly used in such applications [4]. The deployment of robot-assisted therapies would be facilitated by a system that can be adapted to new patients [5,6]. This technique could be extended to reduce the calibration time for each user by adapting an existing classification model based on the improvement observed during the rehabilitation sessions.
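One way such adaptation could look in practice is sketched below: a user-independent model is first trained on pooled data from many users and then incrementally updated with a short calibration recording from a new user. The incremental `SGDClassifier` here stands in for the paper's Adaptive Least-Squares Support Vector Machine, whose exact update rule is not reproduced; all names, sizes, and data are illustrative assumptions.

```python
# Hypothetical sketch of adapting a user-independent model to a new user
# with a short calibration set, instead of retraining from scratch.
# SGDClassifier is a stand-in for the paper's Adaptive LS-SVM.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
n_features, n_classes = 36, 7
classes = np.arange(n_classes)

# Stage 1: user-independent model trained on pooled multi-user data.
X_pool = rng.standard_normal((1000, n_features))
y_pool = rng.integers(0, n_classes, size=1000)
model = SGDClassifier()
model.partial_fit(X_pool, y_pool, classes=classes)

# Stage 2: incremental update with a short calibration recording
# from the new user, reducing per-user calibration time.
X_new = rng.standard_normal((30, n_features))
y_new = rng.integers(0, n_classes, size=30)
model.partial_fit(X_new, y_new)

print(model.predict(X_new[:5]))
```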