Abstract

Millimeter wave (mmWave) sensing promises to enable contactless, high-precision "in-air" gesture-based human–computer interaction (HCI). While previous works have demonstrated its feasibility, they require tedious gesture collection for person-independent recognition and operate in an offline mode without considering practical issues such as gesture segmentation and recognition latency. In this work, we propose M-Gesture, a person-independent, real-time mmWave gesture recognition solution. We first build a compact gesture model with a custom-designed neural network that distills the unique features underlying each gesture while suppressing personalized discrepancies across different users, without extra data collection or retraining. Furthermore, we design a system status transition (SST) mechanism to decide when a gesture begins and ends, which enables automatic gesture segmentation and hence real-time recognition. We prototype M-Gesture on a commodity mmWave sensor and demonstrate its advantages with two practical applications: 1) a contactless music player and 2) a contactless camera. Extensive experiments and user studies show that M-Gesture achieves 99% accuracy with a response latency under 25 ms. Moreover, we collect and release a comprehensive mmWave gesture data set consisting of 54,620 instances from 144 persons, which may have independent value in facilitating future research.
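The abstract describes the SST as a state machine that decides when a gesture begins and ends from the incoming radar stream. The paper's actual transition logic is not given here; the sketch below is only a minimal, hypothetical illustration of that idea, using frame-level motion energy with assumed threshold names and values to switch between an idle and an active state and emit a buffered segment for downstream classification.

```python
import numpy as np

# Hypothetical status-transition segmenter: a tiny two-state machine that
# switches between IDLE and GESTURE based on per-frame motion energy and
# buffers the frames of an active gesture. Thresholds, frame format, and
# names are illustrative assumptions, not M-Gesture's actual SST design.

IDLE, GESTURE = 0, 1

class StatusTransitionSegmenter:
    def __init__(self, start_thresh=0.5, end_thresh=0.2, min_quiet_frames=5):
        self.start_thresh = start_thresh          # energy marking a gesture start
        self.end_thresh = end_thresh              # energy below which motion is "quiet"
        self.min_quiet_frames = min_quiet_frames  # quiet frames needed to end a gesture
        self.state = IDLE
        self.quiet_count = 0
        self.buffer = []

    def process_frame(self, frame):
        """Feed one radar frame (e.g., a range-Doppler map as a NumPy array);
        return the completed gesture segment (list of frames) when the state
        machine detects the gesture end, otherwise None."""
        energy = float(np.mean(np.abs(frame)))
        if self.state == IDLE:
            if energy > self.start_thresh:
                self.state = GESTURE
                self.buffer = [frame]
                self.quiet_count = 0
        else:  # GESTURE: keep buffering until motion stays quiet long enough
            self.buffer.append(frame)
            if energy < self.end_thresh:
                self.quiet_count += 1
                if self.quiet_count >= self.min_quiet_frames:
                    segment, self.buffer = self.buffer, []
                    self.state = IDLE
                    return segment
            else:
                self.quiet_count = 0
        return None
```

In such a scheme, each completed segment would be handed to the recognition network as soon as the end state is reached, which is what makes online, automatically segmented recognition possible without manual gesture delimiting.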
