Gesture recognition has found versatile applications in natural human–computer interaction (HCI). Compared with traditional camera-based or wearable-sensor-based solutions, gesture recognition using millimeter-wave (mmWave) radar has attracted growing attention because it is contact-free, privacy-preserving, and less environment-dependent. Recently, most studies have adopted one of the Range-Doppler Image (RDI), Range-Angle Image (RAI), Doppler-Angle Image (DAI), or Micro-Doppler Spectrogram extracted from the raw radar signal as the input of a deep neural network to realize gesture recognition. However, the relative effectiveness of these four inputs for gesture recognition has attracted little attention so far. Moreover, the lack of large amounts of labeled data restricts the performance of traditional supervised learning networks. In this paper, we first conduct extensive experiments to compare the effectiveness of each of these four inputs for gesture recognition. Then we propose a semi-supervised learning framework that utilizes a small amount of labeled data in the source domain and large amounts of unlabeled data in the target domain. Specifically, we combine the Π-model with data augmentation tricks tailored to the mmWave signal to realize domain-independent gesture recognition. Extensive experiments on a public mmWave gesture dataset demonstrate the superior effectiveness of the proposed system.
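The Π-model trains on unlabeled data by penalizing disagreement between two stochastically augmented forward passes of the same sample, alongside the usual cross-entropy loss on labeled data. The sketch below illustrates that objective in NumPy with a toy linear model; the `augment` function (Gaussian noise), the loss weighting `w`, and all array shapes are illustrative assumptions, since the abstract does not detail the paper's mmWave-specific augmentations or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Hypothetical stochastic augmentation (additive Gaussian noise); a
    # stand-in for the paper's mmWave-signal-specific augmentation tricks.
    return x + rng.normal(0.0, 0.1, size=x.shape)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def pi_model_loss(model, x_lab, y_lab, x_unlab, w):
    """Pi-model objective: supervised cross-entropy on the labeled batch
    plus w times an MSE consistency penalty between the predictions of
    two independently augmented passes over the unlabeled batch."""
    # Supervised term on the (few) labeled source-domain samples.
    p_lab = softmax(model(augment(x_lab)))
    ce = -np.mean(np.log(p_lab[np.arange(len(y_lab)), y_lab] + 1e-12))
    # Unsupervised consistency term on unlabeled target-domain samples:
    # the same inputs, augmented twice, should yield similar predictions.
    p1 = softmax(model(augment(x_unlab)))
    p2 = softmax(model(augment(x_unlab)))
    consistency = np.mean((p1 - p2) ** 2)
    return ce + w * consistency

# Toy linear "model" over 8-dim features and 4 gesture classes.
W = rng.normal(size=(8, 4))
model = lambda x: x @ W
x_lab = rng.normal(size=(16, 8))
y_lab = rng.integers(0, 4, size=16)
x_unlab = rng.normal(size=(64, 8))
loss = pi_model_loss(model, x_lab, y_lab, x_unlab, w=1.0)
```

In practice `w` is usually ramped up from zero over the first training epochs so that the consistency term does not dominate before the supervised signal has shaped the classifier.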