Abstract

Radar-based hand gesture recognition (HGR) has attracted growing interest in human–computer interaction. The rich diversity in how people perform gestures causes large intra-class variance, and sample quality varies from person to person, making it harder to identify dynamic, complicated, and deforming hand gestures. Real-world deployment therefore demands a robust method that recognizes gestures from non-specified users. To address these issues, an adaptive framework for gesture recognition is proposed with two main contributions. First, a trajectory range-Doppler map (t-RDM) is obtained by non-coherently accumulating range-Doppler maps across frames to capture inter-frame dependencies, and the t-RDM is then enhanced to highlight trajectory information. To account for the different movement patterns of gestures, a two-pathway convolutional neural network is proposed that independently mines discriminative information from the raw and enhanced t-RDMs, each with different salient features. Second, an adaptive individual cost (AIC) loss is proposed, which builds a powerful feature representation by adaptively extracting the commonalities among variant gestures according to sample quality. The proposed method is evaluated on a public Soli radar dataset on two tasks: cross-person recognition and cross-scenario recognition. Both modes require the training and test sets to be mutually exclusive not only at the sample level but also at the source level. Extensive experiments demonstrate that the proposed method outperforms existing approaches in alleviating the low recognition performance caused by gesture diversity.
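The t-RDM construction described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes per-frame range-Doppler maps are available as a complex array, sums their magnitudes (non-coherent accumulation, which discards phase so motion across frames leaves a visible trajectory), and applies a hypothetical gamma-curve enhancement; the paper's actual enhancement step is not specified in the abstract.

```python
import numpy as np

def trajectory_rdm(frames):
    """Non-coherently accumulate per-frame range-Doppler maps (RDMs)
    into a single trajectory RDM (t-RDM).

    frames: array of shape (n_frames, n_range_bins, n_doppler_bins),
            complex or real RDM values.
    Summing magnitudes (not complex values) discards phase, so the
    hand's motion across frames accumulates into a trajectory.
    """
    frames = np.asarray(frames)
    return np.abs(frames).sum(axis=0)

def enhance(t_rdm, gamma=0.5):
    """Hypothetical enhancement: normalise to [0, 1], then apply a
    gamma curve (gamma < 1) to boost the low-energy trajectory tail."""
    t = t_rdm - t_rdm.min()
    peak = t.max()
    if peak > 0:
        t = t / peak
    return t ** gamma
```

A usage note: the raw t-RDM and the enhanced t-RDM would then be fed to the two pathways of the network, each pathway seeing a different rendering of the same trajectory.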
