Abstract
Developing a human activity recognition (HAR) system for employees is essential to bringing intelligence into smart office environments, enabling human-centered applications that enhance employees’ well-being. Although remarkable progress has been made on HAR in smart offices, several issues remain, including the lack of a privacy-preserving and unobtrusive sensing method and the need for better generalization across users. This study therefore investigates a novel privacy-preserving HAR method based on multimodal sensors: an infrared array sensor, a sensing chair, and the triaxial accelerometer built into a smartphone. The effectiveness of different modality combinations is examined. Moreover, a deep learning model is developed for multimodal data fusion to improve generalization across users. The model combines a residual 3D convolutional neural network (CNN) and a 1D CNN to learn spatial–temporal feature representations of the different modalities, and it employs external memory units and an adaptive decision fusion operation to fuse the multimodal data. Finally, extensive experiments are conducted on a self-collected dataset using leave-one-subject-out cross-validation, and the results verify the effectiveness of the proposed model.
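To make the fusion idea in the abstract concrete, the following is a minimal PyTorch sketch of a two-branch multimodal classifier: a 3D CNN branch for infrared array frames, a 1D CNN branch for accelerometer sequences, and a learnable weighted combination of per-modality logits standing in for the adaptive decision fusion. All layer sizes, input shapes, and the fusion scheme are illustrative assumptions, not the authors’ exact architecture (the residual blocks, sensing-chair branch, and external memory units are omitted).

```python
# Illustrative sketch only: shapes, sizes, and the simple softmax-weighted
# decision fusion are assumptions, not the paper's exact model.
import torch
import torch.nn as nn


class MultimodalHAR(nn.Module):
    def __init__(self, num_classes: int = 6):
        super().__init__()
        # 3D CNN branch: spatial-temporal features from infrared array
        # frames, input assumed as [batch, 1, time, height, width].
        self.ir_branch = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # 1D CNN branch: temporal features from the triaxial accelerometer,
        # input assumed as [batch, 3, time].
        self.acc_branch = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.ir_head = nn.Linear(16, num_classes)
        self.acc_head = nn.Linear(16, num_classes)
        # Learnable per-modality weights: a stand-in for the adaptive
        # decision fusion described in the abstract.
        self.fusion_weights = nn.Parameter(torch.ones(2))

    def forward(self, ir: torch.Tensor, acc: torch.Tensor) -> torch.Tensor:
        logits_ir = self.ir_head(self.ir_branch(ir))
        logits_acc = self.acc_head(self.acc_branch(acc))
        w = torch.softmax(self.fusion_weights, dim=0)
        return w[0] * logits_ir + w[1] * logits_acc


# Usage with dummy tensors: 2 samples, 8 infrared frames of 8x8 pixels,
# and 50 triaxial accelerometer readings per sample.
model = MultimodalHAR()
out = model(torch.randn(2, 1, 8, 8, 8), torch.randn(2, 3, 50))
print(out.shape)  # torch.Size([2, 6])
```

Decision-level fusion of this kind keeps each modality's branch independent, which also makes it straightforward to evaluate single-modality and combined-modality variants, as the abstract's comparison of modality combinations requires.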