Abstract

Radar-based dynamic hand gesture classification has been an active research field in recent years, and deep learning methods using radar sensors are widely used for this task. Existing deep learning methods incur the overhead of slow- and fast-time Fourier transforms to obtain the various spectrum images used as input to a deep convolutional neural network (DCNN). In this article, a dynamic hand gesture classification method based on multichannel radar using a multistream fusion 1-D convolutional neural network (MSF-1-D-CNN) is proposed. The proposed MSF-1-D-CNN consists of four parallel branches, each containing inception modules that extract features from the raw echo data of one receiving antenna. The extracted features from the branches are then concatenated, and a long short-term memory (LSTM) layer extracts the temporal characteristics of the concatenated features. Finally, a dense layer with the softmax function produces the gesture classification result. Experimental results show that, compared with existing methods, the proposed method using multichannel radar data provides improved classification accuracy when the hand is at a large incidence angle or distance from the radar. Moreover, the proposed method reduces the number of network parameters and the computational complexity, giving it the potential to be implemented on commercial embedded systems.
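The data flow described above (four per-antenna 1-D convolutional branches, channel-wise concatenation, an LSTM, and a softmax dense layer) can be sketched as a shape trace. All hyperparameters below (samples per gesture frame, filter counts, kernel size, stride, LSTM units, number of gesture classes) are illustrative assumptions for the sketch; the abstract does not specify them.

```python
# Shape trace of the MSF-1-D-CNN pipeline described in the abstract.
# Every hyperparameter here is an assumption for illustration only.

def conv1d_out_len(n, kernel, stride=1, padding=0):
    """Output length of a 1-D convolution along the time axis."""
    return (n + 2 * padding - kernel) // stride + 1

def msf_1d_cnn_shapes(n_samples=512, n_antennas=4,
                      branch_filters=32, lstm_units=64, n_classes=8):
    # Each receiving antenna feeds one branch of inception-style 1-D convs.
    # Assume each branch reduces the time axis with kernel 7, stride 2.
    t = conv1d_out_len(n_samples, kernel=7, stride=2)
    branch_shape = (t, branch_filters)            # per-branch feature map
    # Branch outputs are concatenated along the feature (channel) axis.
    concat_shape = (t, branch_filters * n_antennas)
    # The LSTM summarizes the temporal axis into a single feature vector.
    lstm_shape = (lstm_units,)
    # Dense + softmax yields one probability per gesture class.
    out_shape = (n_classes,)
    return branch_shape, concat_shape, lstm_shape, out_shape
```

For example, with the assumed defaults, four branches of 32 filters concatenate into 128 features per time step before the LSTM, independent of the number of time samples.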
