Abstract
Traditional approaches to human action recognition usually rely on wearable sensors or video. However, these methods have limitations such as poor portability, sensitivity to light intensity, and privacy concerns. In this paper, an RFID-based non-wearable human action recognition scheme is proposed. To reduce the occlusion effect of the human body on the signal and to increase the diversity of the reflected signal, a tag array is constructed. Phase and RSSI data are fused as feature data to enrich the signal representation. Furthermore, a combined processing method is proposed to eliminate thermal noise generated by the equipment and to reduce environmental interference. An action segmentation algorithm is then designed to align the RF signals of human actions. Finally, an efficient classification model for human action signals is built on a spatiotemporal graph convolutional network (STGCN). Extensive experiments demonstrate that the system achieves an overall human action recognition accuracy of 92.8%. Compared with mainstream recognition algorithms, the STGCN shows better classification performance in terms of identification precision. In addition, multimodal RFID data fusion further improves recognition accuracy.
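The abstract describes fusing per-tag phase and RSSI streams into multimodal feature data. A minimal sketch of what such a fusion step might look like is given below; it is purely illustrative and not the paper's implementation. The 3x3 tag-array size, the z-score normalization, and the function names are assumptions: phase readings (reported modulo 2*pi by commodity readers) are unwrapped over time, both modalities are normalized to a common scale, and the streams are stacked into one feature tensor per recording.

```python
import numpy as np

# Hypothetical illustration (not the paper's code): fuse phase and RSSI
# readings from an RFID tag array into a single feature tensor.

def unwrap_phase(phase):
    """Unwrap raw reader phase (radians, reported modulo 2*pi) over time."""
    return np.unwrap(phase, axis=0)

def fuse_features(phase, rssi):
    """Stack per-tag phase and RSSI streams into a (T, n_tags, 2) tensor.

    phase, rssi: arrays of shape (T, n_tags), T time samples per tag.
    Each stream is z-score normalized per tag so the two modalities
    share a common scale before fusion.
    """
    p = unwrap_phase(phase)
    p = (p - p.mean(axis=0)) / (p.std(axis=0) + 1e-8)
    r = (rssi - rssi.mean(axis=0)) / (rssi.std(axis=0) + 1e-8)
    return np.stack([p, r], axis=-1)

# Synthetic example: 100 samples from an assumed 3x3 tag array (9 tags).
T, n_tags = 100, 9
rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 2 * np.pi, (T, n_tags))   # radians
rssi = rng.uniform(-70.0, -40.0, (T, n_tags))      # dBm
feat = fuse_features(phase, rssi)
print(feat.shape)  # (100, 9, 2)
```

In a pipeline like the one described, a tensor of this shape (time x tags x modalities) is a natural input for a spatiotemporal graph model, with tags as graph nodes and the last axis as node features.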
Published in: IEEE Internet of Things Journal