Abstract

Human activity recognition (HAR) using wearable sensors is a prominent research topic and has been widely applied in health monitoring, medical treatment, motion analysis, and other fields. Although deep-learning-based HAR has made considerable progress, problems remain, such as incomplete feature extraction and low feature utilization, which can lead to false recognition. To address these problems, we propose a novel deep multifeature extraction framework based on attention mechanism (DMEFAM), which includes a temporal attention feature extraction layer (TAFEL), a channel and spatial attention feature extraction layer (CSAFEL), and an output layer. The TAFEL comprises a bidirectional gated recurrent unit (Bi-GRU) and a self-attention (SA) mechanism, and the CSAFEL comprises the convolutional block attention module (CBAM) and the residual network-18 (ResNet-18). In this framework, the combination of deep neural networks, the SA mechanism, and CBAM assigns different degrees of importance to features, increases the diversity of extracted features, and improves the accuracy of HAR. To improve the practicability of the proposed framework, a daily and aggressive activity dataset (DAAD) was collected by our laboratory for performance evaluation. Experiments are conducted on the wireless sensor data mining laboratory (WISDM) dataset, the University of California Irvine-HAR (UCI-HAR) dataset, and the collected DAAD dataset. The results show that the proposed DMEFAM improves recognition performance effectively compared with other advanced HAR frameworks, achieving recognition accuracies of 97.9%, 96.0%, and 99.2% on the WISDM, UCI-HAR, and DAAD datasets, respectively.
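The core idea of the TAFEL is that self-attention reweights the Bi-GRU hidden states so that more informative time steps contribute more to the pooled feature. A minimal pure-Python sketch of that reweighting step is shown below; the Bi-GRU itself is omitted, and the hidden-state values are illustrative toy numbers, not taken from the paper.

```python
import math


def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def self_attention(seq):
    """Toy scaled dot-product self-attention where queries, keys, and
    values are all the input vectors themselves (no learned projections).
    Each output vector is a convex combination of the input vectors,
    weighted by similarity between time steps."""
    d = len(seq[0])  # feature dimension
    out = []
    for q in seq:
        # similarity of this time step to every time step, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        weights = softmax(scores)
        # attention-weighted sum of all hidden states
        out.append([sum(w * v[j] for w, v in zip(weights, seq))
                    for j in range(d)])
    return out


# Example: three time steps of 2-D features (stand-ins for Bi-GRU states)
hidden = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = self_attention(hidden)
```

In the full framework these weights would be learned jointly with the Bi-GRU; this sketch only illustrates how attention redistributes importance across time steps before the channel/spatial stage.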
