Abstract

Although radar sensors have recently been widely applied to hand gesture recognition (HGR) tasks, conventional radar-based HGR systems still face two major challenges. First, these systems rely on supervised learning approaches to learn gesture features, which normally require a large-scale labeled dataset to address the overfitting problem. However, the acquisition of such a dataset is time-consuming. Second, the radar signature of hand movement is often influenced by micromotion caused by other body parts, which distorts motion features and degrades identification accuracy. To overcome these problems, we propose an unsupervised hand gesture feature learning method that uses a deep convolutional auto-encoder network to analyze hand gesture signals collected by a frequency-modulated continuous-wave (FMCW) radar sensor. First, a convolutional encoder sub-network transforms the input radar range profiles into lower-dimensional representations. Then, a deconvolutional decoder sub-network expands these representations to reconstruct the corresponding input profiles. In addition, to investigate the mechanisms of the proposed network and evaluate its performance, we conduct an in-depth study of the feature maps learned from various hand gesture experimental data and evaluate the corresponding classification performance. The results demonstrate that the proposed convolutional auto-encoder network achieves high recognition accuracy with a low training sample cost, outperforming state-of-the-art hand gesture recognition systems based on a transfer-learned VGGNet and a fully connected auto-encoder network.
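The encoder/decoder structure described above can be illustrated with a minimal sketch. This is not the paper's actual architecture: the input size (64×64 range-profile maps), channel counts, and layer depths are illustrative assumptions; the key idea shown is the unsupervised reconstruction objective, which needs no gesture labels.

```python
# Hedged sketch of a convolutional auto-encoder for radar range profiles.
# All layer sizes and the 1x64x64 input shape are assumptions for illustration.
import torch
import torch.nn as nn


class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: strided convolutions compress the range profile
        # into a lower-dimensional feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # -> 16x32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # -> 32x16x16
            nn.ReLU(),
        )
        # Decoder: transposed (de)convolutions expand the representation
        # back to the original input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # -> 16x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=3, stride=2,
                               padding=1, output_padding=1),        # -> 1x64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code


model = ConvAutoEncoder()
x = torch.rand(4, 1, 64, 64)  # a batch of 4 unlabeled range-profile maps
recon, code = model(x)
# Unsupervised objective: reconstruct the input; no gesture labels required.
loss = nn.functional.mse_loss(recon, x)
```

After training with this reconstruction loss, the encoder's feature maps (`code`) can be used as learned gesture features for a downstream classifier.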
