Abstract

Human activity recognition (HAR) is a key detection technique widely employed in contexts that demand accurate identification of human actions. Mainstream HAR approaches typically rely on sequential data from wearable sensors, focusing on extracting feature representations that correspond to different human activities. However, human activities are complex and diverse: they involve many kinds of features, and the features that matter most vary substantially from one activity to another, making accurate recognition challenging for current artificial-intelligence methods. To address this challenge, this paper introduces the Multi-Feature Combining Attention Neural Network (MFCANN), a framework that incorporates diverse feature extraction together with both local and global feature attention. It is composed of stacked Multi-Feature Combining Attention Blocks (MFCAB) of our design. In contrast to traditional convolutional methods, an MFCAB stacks multiple convolutional components in parallel, enabling the extraction of a broader range of features from human activity data. In addition, we propose the Intra-Module Attention Block (Intra-MAB) and the Inter-Module Attention Block (Inter-MAB), which attend simultaneously to local fine-grained features within module feature maps and to global distinguishing features across module feature maps, yielding more targeted feature learning and strengthening the network's ability to distinguish between activities. Extensive experiments demonstrate that the proposed MFCANN outperforms current mainstream deep learning algorithms on HAR tasks, achieving recognition accuracies of 0.9813, 0.9324, and 0.9930 on the UCI-HAR, USC-HAD, and RealWorld datasets, respectively.
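To make the architectural idea concrete, the following is a minimal, hedged sketch of the parallel-branch-plus-attention pattern the abstract describes, written in plain Python on a toy 1-D signal. All function and parameter names here (`moving_avg`, `mfcab_sketch`, `kernel_sizes`) are hypothetical illustrations; the paper's actual MFCAB, Intra-MAB, and Inter-MAB are learned neural-network layers, not the fixed arithmetic shown below.

```python
import math

def moving_avg(x, k):
    """Toy stand-in for one convolutional branch with kernel size k."""
    half = k // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def softmax(v):
    """Numerically stable softmax over a list of floats."""
    m = max(v)
    e = [math.exp(a - m) for a in v]
    s = sum(e)
    return [a / s for a in e]

def mfcab_sketch(x, kernel_sizes=(3, 5, 7)):
    # Parallel "branches": one feature map per kernel size,
    # mimicking the block's parallel convolutional components.
    branches = [moving_avg(x, k) for k in kernel_sizes]
    # Intra-branch (local) attention: reweight positions inside
    # each feature map, analogous in spirit to Intra-MAB.
    attended = [[f * w for f, w in zip(b, softmax(b))] for b in branches]
    # Inter-branch (global) attention: weight whole branches by their
    # mean response, analogous in spirit to Inter-MAB, then fuse.
    gate = softmax([sum(b) / len(b) for b in attended])
    return [sum(g * b[i] for g, b in zip(gate, attended))
            for i in range(len(x))]

signal = [0.0, 0.2, 0.9, 1.0, 0.8, 0.1, 0.0, 0.3]
fused = mfcab_sketch(signal)
print(len(fused))  # same length as the input signal
```

The sketch only illustrates the data flow: several branches extract features in parallel, attention reweights positions within each branch and then the branches against each other, and the gated sum forms the combined feature map.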
