Abstract
Electromyography-based wearable biosensors are used for prosthetic control. Machine-learning prosthetic controllers are based on either classification or regression models. The advantage of the regression approach is that it yields a smoother and more natural controller. However, the existing training method for regression-based solutions is the same as the protocol used in the classification approach, in which only a finite set of movements is trained. In this paper, we present a novel training protocol for myoelectric regression-based solutions that includes a feedback term, allowing the subject to explore more than a finite set of movements; the feedback is automatically adjusted according to the subject's real-time performance during the training session. Consequently, the algorithm distributes the training time efficiently, focusing on the movements where performance is worst and optimizing the training for each user. We tested and compared the existing and new training strategies in 20 able-bodied participants and 4 amputees. The results show that the novel training procedure autonomously produces a better training session. As a result, the new controller outperforms the one trained with the existing method: for the able-bodied participants, the average percentage of targets hit increased from 86% to 95% and the path efficiency from 40% to 84%, while for the subjects with limb deficiencies, the completion rate increased from 58% to 69% and the path efficiency from 24% to 56%.
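To make the adaptive-allocation idea concrete, the sketch below illustrates one plausible reading of it: training targets are sampled with probability proportional to a running per-movement error estimate, so poorly performed movements receive more training time. This is a minimal illustration, not the authors' algorithm; all names (`movements`, `select_next_target`, `update_performance`, the smoothing factor `alpha`) are hypothetical, and the paper defines the actual feedback term and update rule.

```python
# Hypothetical sketch of performance-adaptive target selection during a
# myoelectric training session. Not the paper's actual protocol.
import numpy as np

rng = np.random.default_rng(0)

movements = ["flexion", "extension", "pronation", "supination"]
error = {m: 1.0 for m in movements}  # running per-movement error, pessimistic start
alpha = 0.2                          # smoothing factor for the running estimate

def select_next_target():
    """Sample the next training target, biased toward poorly performed movements."""
    weights = np.array([error[m] for m in movements])
    probs = weights / weights.sum()
    return rng.choice(movements, p=probs)

def update_performance(movement, observed_error):
    """Exponentially smooth the per-movement error after each trial."""
    error[movement] = (1 - alpha) * error[movement] + alpha * observed_error

# Simulated session: movements with higher residual error get sampled more often.
for trial in range(20):
    target = select_next_target()
    observed = rng.uniform(0.0, 1.0)  # placeholder for the subject's measured error
    update_performance(target, observed)
```

Under this reading, the sampling weights play the role of the abstract's feedback term: as a movement's real-time error falls, the session automatically shifts training time toward the movements that still need it.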