Abstract

Gesture recognition acts as a key enabler for user-friendly human-computer interfaces (HCI). To bridge the human-computer barrier, numerous efforts have been devoted to designing accurate fine-grained gesture recognition systems. Recent advances in wireless sensing hold promise for a ubiquitous, non-invasive and low-cost system built on existing Wi-Fi infrastructure. In this paper, we propose DeepNum, which enables fine-grained finger gesture recognition with only a pair of commercial Wi-Fi devices. The key insight of DeepNum is to incorporate the quintessence of deep learning-based image processing so as to better depict the influence induced by subtle finger movements. In particular, we make multiple efforts to transform sensitive Channel State Information (CSI) into depth radio images, including antenna selection, gesture segmentation and image construction, followed by noisy image purification using high-dimensional relations. To fulfill the restrictive input-size requirements of the deep learning model, we propose a novel region-selection method to constrain the image size and select qualified regions with dominant color and texture features. Finally, a 7-layer Convolutional Neural Network (CNN) and a SoftMax function are adopted to achieve automatic feature extraction and accurate gesture classification. Experimental results demonstrate the excellent performance of DeepNum, which recognizes 10 finger gestures with an overall accuracy of 98% in three typical indoor scenarios.
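The exact image-construction procedure is described in the paper itself; as a rough illustration of the idea only, the following minimal NumPy sketch maps one segmented CSI slice to a grayscale radio image. The shapes, function name and the simple amplitude-to-pixel rescaling are assumptions for illustration, not the paper's method.

```python
# Hypothetical sketch: turning one segmented CSI slice into a radio image.
# Assumes a (subcarriers x packets) complex CSI slice from the selected
# antenna pair and a plain amplitude-to-pixel rescaling.
import numpy as np

def csi_slice_to_image(csi_slice: np.ndarray) -> np.ndarray:
    """Map a complex CSI slice (subcarriers x packets) to an 8-bit image."""
    amplitude = np.abs(csi_slice)                 # drop phase, keep magnitude
    lo, hi = amplitude.min(), amplitude.max()
    scaled = (amplitude - lo) / (hi - lo + 1e-9)  # normalise to [0, 1]
    return (scaled * 255).astype(np.uint8)        # grayscale "radio image"

# Example: 30 subcarriers observed over 200 packets of synthetic CSI.
rng = np.random.default_rng(0)
fake_csi = rng.normal(size=(30, 200)) + 1j * rng.normal(size=(30, 200))
image = csi_slice_to_image(fake_csi)
print(image.shape, image.dtype)                   # (30, 200) uint8
```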

Highlights

  • We present a set of novel methods to transfer signal processing into image processing, including raw Channel State Information (CSI) pre-processing, high-dimensional radio image construction, and image purification using spatial relations

  • We present a novel finger gesture recognition system, named DeepNum, the core part of which is radio image construction and image manipulation

  • We resort to Higher-Order Singular Value Decomposition (HOSVD) for image de-noising and to region selection to meet the input-size requirement of the Convolutional Neural Network (CNN) model, which extracts deep features automatically and classifies gestures accurately
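The paper's HOSVD settings (tensor layout and truncation ranks) are not reproduced above, so the following is only a minimal pure-NumPy sketch of truncated HOSVD de-noising, assuming a 3-way CSI-image tensor (e.g., subcarriers × packets × antennas) and illustrative ranks.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: move the given mode to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def fold(matrix, mode, shape):
    """Inverse of unfold for a tensor of the given target shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape([shape[mode]] + rest), 0, mode)

def mode_dot(tensor, matrix, mode):
    """Multiply a tensor by a matrix along one mode (n-mode product)."""
    new_shape = list(tensor.shape)
    new_shape[mode] = matrix.shape[0]
    return fold(matrix @ unfold(tensor, mode), mode, new_shape)

def hosvd_denoise(tensor, ranks):
    """Truncated HOSVD: project each mode onto its leading singular subspace."""
    factors, core = [], tensor
    for mode, rank in enumerate(ranks):
        u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
        u = u[:, :rank]                      # keep the dominant directions only
        factors.append(u)
        core = mode_dot(core, u.T, mode)     # shrink this mode of the core
    approx = core
    for mode, u in enumerate(factors):       # expand back to the original shape
        approx = mode_dot(approx, u, mode)
    return approx

# Illustrative use: de-noise a stack of radio images (subcarriers x packets x antennas).
noisy = np.random.default_rng(1).normal(size=(30, 200, 3))
clean = hosvd_denoise(noisy, ranks=(10, 40, 2))
print(clean.shape)                           # (30, 200, 3)
```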


Summary

Introduction

Interactions with smart sensors have advanced to an unprecedented extent [1]. We propose a fine-grained finger gesture recognition system built on a deep CNN architecture, named DeepNum. Compared with our previous DNN-based version [21], DeepNum employs a more powerful architecture that extracts prominent features with multiple convolutional filters, reduces training cost through parameter sharing, and achieves better performance. To achieve real-time gesture recognition, detecting the transition points of human movements and segmenting the CSI stream into slices are critical steps. It is non-trivial to feed variable-size segments into the CNN model directly, since the input layer requires fixed-size input. We therefore propose a CNN-based deep learning framework that achieves automatic feature extraction and robust gesture classification without labor-intensive training efforts.
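The paper specifies a 7-layer CNN followed by a SoftMax classifier over 10 finger gestures, but the per-layer configuration is not given here. The PyTorch sketch below is therefore an assumption-laden stand-in: four convolutional plus three fully connected layers, and a 1 × 64 × 64 input standing in for the fixed-size region selected from the purified radio image.

```python
# Hypothetical 7-layer CNN classifier (layer sizes assumed, not from the paper).
import torch
import torch.nn as nn

class DeepNumCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, num_classes),          # softmax applied at inference
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = DeepNumCNN()
logits = model(torch.randn(8, 1, 64, 64))        # batch of 8 radio-image regions
probs = torch.softmax(logits, dim=1)             # class probabilities, shape (8, 10)
```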

Wi-Fi-Based Gesture Recognition
Deep Learning-Based Wi-Fi Sensing
Preliminaries
Channel State Information
Motivation and Challenges
System Overview
Signal Processing
Antenna Selection
HOSVD-Based Image Denoising
CNN-Based Classification
Experiment Configuration
Performance Evaluation
Impact
Limitations and Discussion
Conclusions
