Gesture recognition, a basis of human–computer interaction (HCI), is a significant component in the development of smart homes, VR, and senior care management. Most existing gesture recognition methods still depend on sensors worn by the user or on video in order to achieve fine-grained recognition. This paper implements a gesture recognition method that is independent of the environment and of the gesture drawing direction, and that achieves gesture classification using only a small amount of sample data. Wi-NN, proposed in this study, does not require the user to wear any additional device. Instead, channel state information (CSI) extracted from the Wi-Fi signal is used to capture the motion information of the human body. After pre-processing to suppress environmental noise as much as possible, clear motion information is extracted with a time-domain feature extraction method to obtain the gesture feature data. The extracted features are then fed into a weighted k-nearest neighbor (KNN) recognizer for classification. The experimental results show that, in the same environment, the recognition accuracy for the same gesture performed by different users and for different gestures performed by the same user reached 93.1% and 89.6%, respectively. Experiments in different environments also achieved good recognition results, and compared with other methods, the approach in this paper delivers better recognition performance. Evidently, good classification results are obtained once the raw data are processed and fed into the weighted KNN.
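The sketch below is not the authors' code; it only illustrates the kind of pipeline the abstract describes, under assumptions of my own: simple time-domain statistics (mean, standard deviation, range) as features over a denoised CSI amplitude window, and inverse-distance weighting in the KNN vote. The data in the usage example are random stand-ins.

```python
# Minimal sketch (assumed, not the paper's implementation): time-domain feature
# extraction from a CSI amplitude window, then inverse-distance-weighted KNN.
import numpy as np

def time_domain_features(csi_window: np.ndarray) -> np.ndarray:
    """csi_window: (num_packets, num_subcarriers) amplitude matrix for one gesture."""
    per_subcarrier = np.stack([
        csi_window.mean(axis=0),                           # mean amplitude
        csi_window.std(axis=0),                            # amplitude fluctuation
        csi_window.max(axis=0) - csi_window.min(axis=0),   # amplitude range
    ])
    return per_subcarrier.ravel()                          # flatten to one feature vector

def weighted_knn_predict(train_X, train_y, x, k=5, eps=1e-9):
    """Classify x by voting among its k nearest training samples,
    each vote weighted by the inverse of its Euclidean distance."""
    dists = np.linalg.norm(train_X - x, axis=1)
    neighbors = np.argsort(dists)[:k]
    votes = {}
    for i in neighbors:
        votes[train_y[i]] = votes.get(train_y[i], 0.0) + 1.0 / (dists[i] + eps)
    return max(votes, key=votes.get)

# Usage with random stand-in data (a real pipeline would use pre-processed CSI):
rng = np.random.default_rng(0)
train_X = np.array([time_domain_features(rng.normal(size=(200, 30))) for _ in range(40)])
train_y = np.array([i % 4 for i in range(40)])             # 4 hypothetical gesture classes
test_x = time_domain_features(rng.normal(size=(200, 30)))
print(weighted_knn_predict(train_X, train_y, test_x, k=5))
```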