Abstract

Recent advancements in the Internet of Medical Things (IoMT) have revolutionized the healthcare sector, making it an active research area in both academia and industry. Following these advances, automatic Human Activity Recognition (HAR) is now integrated into the IoMT, facilitating remote patient monitoring systems for smart healthcare. However, implementing HAR via computer vision is intricate due to complex spatiotemporal patterns, single-stream fusion, and cluttered backgrounds. Mainstream approaches rely on pre-trained CNN models, which extract non-salient features because of their generalized weight optimization and limited discriminative feature fusion. In addition, their sequential models perform inadequately in complex scenarios owing to the vanishing gradients encountered during backpropagation across multiple layers. In response to these challenges, we propose a multiscale feature fusion framework for both indoor and outdoor environments to enhance HAR in healthcare monitoring systems, composed of two main stages. First, the proposed Human Centric Attentional Fusion (HCAF) network is fused with the intermediate convolutional features of a lightweight MobileNetV3 backbone to enrich spatial learning capabilities for accurate HAR. Next, a Deep Multiscale Features Fusion (DMFF) network is proposed that enhances long-range temporal dependencies by redesigning the traditional bidirectional LSTM network in a residual fashion, followed by Sequential Multihead Attention (SMA) to eliminate non-relevant information and optimize the spatiotemporal feature vectors. The performance of the proposed fusion model is evaluated on benchmark healthcare and general activity datasets. On the healthcare side, we used the Multiple Camera Fall and UR Fall Detection datasets, achieving 99.941% and 100% accuracy, respectively.
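To make the two-stage temporal design concrete, the sketch below shows a residual bidirectional LSTM followed by multihead self-attention over per-frame CNN features, in the spirit of the DMFF/SMA stage described above. This is a minimal illustration, not the authors' implementation: the feature dimension, hidden size, head count, and class count are assumed values, and the pooling/classifier head is a placeholder.

```python
import torch
import torch.nn as nn

class ResidualBiLSTM(nn.Module):
    """Bidirectional LSTM block with a residual (skip) connection,
    which eases gradient flow compared to a plain stacked BiLSTM."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.lstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, dim)  # project back so the skip sum matches

    def forward(self, x):             # x: (batch, frames, dim)
        out, _ = self.lstm(x)
        return x + self.proj(out)     # residual connection

class TemporalFusion(nn.Module):
    """Residual BiLSTM followed by multihead self-attention over time
    (standing in for the SMA stage), then a simple classifier head.
    All sizes below are illustrative assumptions."""
    def __init__(self, dim: int = 576, hidden: int = 256,
                 heads: int = 8, num_classes: int = 51):
        super().__init__()
        self.rnn = ResidualBiLSTM(dim, hidden)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, feats):         # feats: (batch, frames, dim) backbone features
        h = self.rnn(feats)
        a, _ = self.attn(h, h, h)     # self-attention down-weights irrelevant frames
        return self.head(a.mean(dim=1))  # temporal average pooling

# Toy input: 2 clips, 16 frames each, 576-dim per-frame features
clip = torch.randn(2, 16, 576)
logits = TemporalFusion()(clip)
print(logits.shape)                   # (2, 51): one score per activity class
```

The residual projection back to the input dimension is what lets the skip connection be a plain addition; without it, the 2×hidden BiLSTM output could not be summed with the input features.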
In addition, our fusion strategy is rigorously evaluated on three challenging general HAR datasets, HMDB51, UCF101, and UCF50, achieving 74.942%, 97.337%, and 96.156% accuracy, respectively, surpassing State-of-The-Art (SOTA) methods. The runtime analysis shows that the proposed method is twice as fast as existing methods.
