Abstract

Human activity recognition has broad application prospects and research significance in intelligent monitoring, assisted driving, and human-computer interaction, such as monitoring elderly people living alone, warning of dangerous driver behaviors, and developing somatosensory games. Traditionally, human activity recognition is realized with cameras or wearable devices. However, in privacy-sensitive settings such as hospital wards and cars, users may be unwilling to share private videos. In this paper, we use millimeter-wave radar to collect point clouds of human activities, design for the first time a novel graph neural network, MMPoint-GNN, with dynamic edges to process sparse point clouds, and combine it with a Bidirectional LSTM to build a human activity recognition framework. We transform the logical edge-selection operation into a differentiable function via an edge selection network, achieving dynamic edge selection in MMPoint-GNN. Finally, we evaluate our method against other approaches on the MMActivity and MMGesture datasets. The results show that MMPoint-GNN outperforms all other baselines. The code is available at https://github.com/gongpx20069/mmRadar_for_HAR_VS
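The sketch below illustrates, under stated assumptions, the kind of pipeline the abstract describes: a graph layer whose edges are selected by a small differentiable scoring network (standing in for the paper's edge selection network), applied per radar frame, followed by a Bidirectional LSTM over the frame sequence. All module names, feature dimensions (e.g. 5 input channels per point, 5 output classes), and the sigmoid edge gate are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an MMPoint-GNN + BiLSTM pipeline (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeSelection(nn.Module):
    """Scores candidate edges and produces soft, differentiable edge weights."""

    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, F) point features; build all pairwise edge features (N, N, 2F).
        n = x.size(0)
        pairs = torch.cat(
            [x.unsqueeze(1).expand(n, n, -1), x.unsqueeze(0).expand(n, n, -1)], dim=-1
        )
        # A sigmoid keeps edge selection differentiable (replacing a hard logic test).
        return torch.sigmoid(self.mlp(pairs)).squeeze(-1)  # (N, N) edge weights


class MMPointGNNLayer(nn.Module):
    """One message-passing layer whose adjacency is chosen by EdgeSelection."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.edge_sel = EdgeSelection(feat_dim)
        self.update = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.edge_sel(x)                              # dynamic edge weights (N, N)
        msgs = w @ x / (w.sum(-1, keepdim=True) + 1e-6)   # weighted neighbor mean
        return F.relu(self.update(torch.cat([x, msgs], dim=-1)))


class HARModel(nn.Module):
    """Per-frame point-cloud encoder followed by a Bidirectional LSTM over time."""

    def __init__(self, in_dim: int = 5, feat_dim: int = 64, num_classes: int = 5):
        super().__init__()
        self.embed = nn.Linear(in_dim, feat_dim)
        self.gnn = MMPointGNNLayer(feat_dim)
        self.bilstm = nn.LSTM(feat_dim, 128, batch_first=True, bidirectional=True)
        self.cls = nn.Linear(2 * 128, num_classes)

    def forward(self, frames: list) -> torch.Tensor:
        # frames: list of (N_t, in_dim) radar point clouds, one per time step;
        # each frame may contain a different number of points (sparse point clouds).
        feats = [self.gnn(self.embed(f)).max(dim=0).values for f in frames]
        seq = torch.stack(feats).unsqueeze(0)             # (1, T, feat_dim)
        out, _ = self.bilstm(seq)
        return self.cls(out[:, -1])                       # class logits for the clip


if __name__ == "__main__":
    # A toy clip of 20 frames with 8-31 points each, 5 channels per point.
    clip = [torch.randn(torch.randint(8, 32, (1,)).item(), 5) for _ in range(20)]
    print(HARModel()(clip).shape)                         # torch.Size([1, 5])
```

The key design point mirrored here is that edge selection is expressed as a smooth gate rather than a hard rule, so gradients flow through the graph construction step during training.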
