Abstract

With the rapid development of the Internet of Things and the improvement of computing power, edge computing has become an emerging computing paradigm that mediates between the terminal and the cloud. One of the most representative applications of edge computing is human-activity recognition at the edge side, as it has lower latency and reduces transmission costs compared with processing at the cloud side. However, existing approaches have two major drawbacks: (1) they can only recognize isolated actions and are incapable of continuous activity recognition; and (2) they are not robust to action transformation and environmental noise, owing to their value-feature-based matching strategy. In this paper, we propose HCAR, a structure-feature-based human continuous activity recognition system that is insensitive to action transformation and environmental noise. First, we leverage word2vec to embed the CSI sequences into a CSI value space. Second, we select representative features from the embedded vectors and use HMM-LDA to cluster them into different action categories. Finally, for each newly arriving sequence, we compute the Hellinger distance and the bimodality coefficient with respect to each category and then identify the corresponding action(s). We implement HCAR on an Intel 5300 NIC to evaluate its activity recognition precision in different cases. The experiments show that HCAR can recognize actions in unsegmented CSI sequences with high accuracy, i.e., >90%.
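The classification step compares each new sequence against the learned categories using two well-known statistics. As a minimal illustrative sketch (function names and the toy inputs are ours, not the paper's), the Hellinger distance between two discrete distributions and Sarle's bimodality coefficient can be computed as follows:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability distributions.

    Ranges from 0 (identical) to 1 (disjoint support).
    """
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)) / np.sqrt(2.0)

def bimodality_coefficient(x):
    """Sarle's bimodality coefficient of a sample.

    BC = (skewness^2 + 1) / (excess_kurtosis + 3(n-1)^2 / ((n-2)(n-3)));
    values above 5/9 suggest a bimodal (or multimodal) distribution.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    z = (x - x.mean()) / x.std(ddof=0)          # standardize the sample
    skew = np.mean(z ** 3)                       # sample skewness
    excess_kurt = np.mean(z ** 4) - 3.0          # sample excess kurtosis
    return (skew ** 2 + 1) / (excess_kurt + 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```

For example, `hellinger([1.0, 0.0], [0.0, 1.0])` is 1 (fully disjoint distributions), while a sample concentrated at two well-separated values yields a bimodality coefficient above the 5/9 threshold, which is one way to flag that a sequence spans more than one action.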
