Abstract

Hand gesture recognition using surface electromyography (sEMG) has become one of the most effective motion analysis techniques for human–computer interaction over the past few decades. In particular, multichannel sEMG techniques have achieved stable performance in hand gesture recognition. However, the conventional approach of collecting and labeling large amounts of data manually is time-consuming. A novel learning method is therefore needed to facilitate efficient data collection and preprocessing. In this paper, a novel autonomous learning framework is proposed that integrates the benefits of both depth vision and EMG signals, automatically labeling the classes of the collected EMG data using depth information. It then utilizes a multiple layer neural network (MNN) classifier to achieve real-time recognition of hand gestures using only the sEMG. The overall framework is demonstrated in an augmented reality application through the recognition of 10 hand gestures using the Myo armband and an HTC VIVE PRO. The results demonstrate strong performance enabled by introducing depth information for real-time data labeling.
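
To make the two-stage pipeline concrete, the following is a minimal sketch, not the authors' implementation: depth-derived labels are attached to synchronized sEMG windows, and a small multilayer network is then trained on the sEMG features alone so that recognition at run time needs only the armband. The helper names (`depth_label`, `emg_features`), the window and channel sizes, and the synthetic data are all assumptions for illustration.

```python
# Sketch of the autonomous-labeling idea (assumed details, not the paper's code).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_GESTURES, WINDOW, CHANNELS = 10, 50, 8          # assumed window/channel sizes

def depth_label(depth_frame):
    """Hypothetical stand-in for the depth-vision module: returns a gesture
    class for the depth frame synchronized with an sEMG window."""
    return int(depth_frame.mean() * N_GESTURES) % N_GESTURES

def emg_features(window):
    """Simple per-channel features (mean absolute value and RMS), a common
    sEMG choice; the paper's exact feature set is not specified here."""
    mav = np.abs(window).mean(axis=0)
    rms = np.sqrt((window ** 2).mean(axis=0))
    return np.concatenate([mav, rms])

# Synthetic stand-ins for synchronized depth frames and 8-channel sEMG windows.
depth_frames = rng.random((2000, 32, 32))
emg_windows = rng.standard_normal((2000, WINDOW, CHANNELS))

y = np.array([depth_label(f) for f in depth_frames])   # automatic labels
X = np.array([emg_features(w) for w in emg_windows])   # sEMG-only features

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)                         # train on automatically labeled data
pred = clf.predict(X[:1])             # at run time, only sEMG input is needed
```

The key point the sketch illustrates is that the depth camera is used only during training to supply labels; the trained classifier consumes sEMG features alone, which is what enables real-time recognition from the armband by itself.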
