Abstract
Complex activities are the activities users perform in their daily lives (e.g., having dinner, shopping). Recognizing them is an important problem in wearable and mobile computing. The time-series data from multimodal sensors exhibit sophisticated relationships that characterize complex activities (e.g., intra-sensor, inter-sensor, and temporal relationships), which makes traditional methods based on manually designed features ineffective. To this end, we propose HConvRNN, an end-to-end deep neural network for complex activity recognition with multimodal sensors that integrates a convolutional neural network (CNN) and a recurrent neural network (RNN). Specifically, it uses a hierarchical CNN to exploit the intra-sensor relationships among similar sensors and to merge the intra-sensor relationships of different sensor modalities into inter-sensor relationships, and it uses an RNN to model the temporal relationships of the signal dynamics. Experiments on real-world datasets show that HConvRNN outperforms existing complex activity recognition methods.
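To make the described pipeline concrete, the following is a minimal PyTorch sketch of the hierarchical CNN-then-RNN idea: per-modality (intra-sensor) convolutions, a fusion convolution across modalities (inter-sensor), and a GRU over time. All layer sizes, kernel widths, the choice of GRU, and the class names are assumptions for illustration, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class HConvRNNSketch(nn.Module):
    """Illustrative sketch of a hierarchical CNN + RNN; hyperparameters are assumed."""

    def __init__(self, modality_channels, conv_channels=32, fused_channels=64,
                 rnn_hidden=128, num_classes=10):
        super().__init__()
        # Intra-sensor stage: one 1-D convolution per sensor modality over the time axis.
        self.intra_convs = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, conv_channels, kernel_size=5, padding=2),
                nn.ReLU(),
            )
            for in_ch in modality_channels
        ])
        # Inter-sensor stage: convolution over the concatenated modality features.
        self.inter_conv = nn.Sequential(
            nn.Conv1d(conv_channels * len(modality_channels), fused_channels,
                      kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Temporal stage: an RNN over the fused feature sequence.
        self.rnn = nn.GRU(fused_channels, rnn_hidden, batch_first=True)
        self.classifier = nn.Linear(rnn_hidden, num_classes)

    def forward(self, inputs):
        # inputs: list of tensors, one per modality, each shaped (batch, channels, time)
        intra = [conv(x) for conv, x in zip(self.intra_convs, inputs)]
        fused = self.inter_conv(torch.cat(intra, dim=1))   # (batch, fused_channels, time)
        seq = fused.transpose(1, 2)                        # (batch, time, fused_channels)
        _, h_n = self.rnn(seq)
        return self.classifier(h_n[-1])                    # class logits per window


# Hypothetical usage: accelerometer, gyroscope, magnetometer (3 axes each),
# 8 windows of 128 time steps per modality.
model = HConvRNNSketch(modality_channels=[3, 3, 3])
batch = [torch.randn(8, 3, 128) for _ in range(3)]
logits = model(batch)
print(logits.shape)  # torch.Size([8, 10])
```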