Abstract

sEMG-based gesture recognition is widely applied in human-machine interaction systems owing to its unique advantages. However, recognition accuracy drops significantly when the electrodes shift. Moreover, in applications such as VR, virtual hands should be displayed in a reasonable posture through self-calibration. We propose an armband that fuses sEMG and IMU signals with autonomously adjustable gain, and an extended spatial transformer convolutional neural network (EST-CNN) with feature-enhanced pretreatment (FEP) that accomplishes both gesture recognition and self-calibration in a single pass. Unlike manual calibration methods, the spatial transformer layers (STL) in EST-CNN automatically learn the transformation relation and explicitly express the rotational angle for coarse correction. Because the feature pattern changes shape under rotational shift, we design a fine-tuning layer (FTL) that regulates the rotational angle within 45°. By combining the STL, the FTL, and the IMU-based posture, EST-CNN computes a non-discretized angle and achieves high-resolution posture estimation from sparse sEMG electrodes. To evaluate EST-CNN, experiments collected three frequently used gestures from four subjects at equidistant angles. Under electrode shift, gesture recognition accuracy reaches 97.06%, which is 5.81% higher than a plain CNN, and the fitness between the estimated and true rotational angles is 99.44%.
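The abstract does not give the exact layer definitions, so the following is only a minimal PyTorch sketch of the general idea it describes: a localization sub-network predicts a rotation angle from the sEMG feature map and applies it as a coarse correction (the spatial-transformer step), while a small regression head outputs a residual angle bounded to ±45° (standing in for the FTL). All module names, channel counts, and the 8-electrode × 64-sample input size are assumptions for illustration, not the authors' architecture.

```python
# Minimal sketch of an STL + fine-angle head, assuming PyTorch.
# Hypothetical names (CoarseSTL, ESTCNNSketch) and sizes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CoarseSTL(nn.Module):
    """Localization net: predict a rotation angle from the feature map,
    then resample the map with the corresponding affine grid (coarse correction)."""

    def __init__(self, in_ch: int):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(in_ch, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, 1),  # predicted rotation angle in radians
        )

    def forward(self, x):
        theta = self.loc(x).squeeze(-1)                      # (B,)
        cos, sin = torch.cos(theta), torch.sin(theta)
        zeros = torch.zeros_like(theta)
        # 2x3 affine matrices encoding a pure rotation
        mat = torch.stack(
            [torch.stack([cos, -sin, zeros], dim=-1),
             torch.stack([sin,  cos, zeros], dim=-1)], dim=1)  # (B, 2, 3)
        grid = F.affine_grid(mat, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False), theta


class ESTCNNSketch(nn.Module):
    """Toy EST-CNN-like model: feature extractor -> coarse STL -> classifier,
    plus a regression head for the residual (fine) angle within +/- 45 degrees."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.stl = CoarseSTL(32)
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(32, n_classes)
        self.fine_angle = nn.Linear(32, 1)

    def forward(self, x):
        f = self.features(x)                  # x: (B, 1, electrodes, time)
        f, coarse_theta = self.stl(f)
        z = self.pool(f)
        logits = self.classifier(z)
        # tanh bounds the residual angle to +/- pi/4 (i.e. +/- 45 degrees)
        fine_theta = 0.25 * torch.pi * torch.tanh(self.fine_angle(z)).squeeze(-1)
        return logits, coarse_theta + fine_theta


x = torch.randn(2, 1, 8, 64)      # batch of sEMG "images": 8 electrodes x 64 samples
logits, angle = ESTCNNSketch()(x)
print(logits.shape, angle.shape)  # torch.Size([2, 3]) torch.Size([2])
```

In the paper's setting, the continuous angle would additionally be fused with the IMU-based posture estimate; that fusion step is omitted here.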
