Abstract

Sensor-based human activity recognition aims to classify human activities or behaviors from wearable or embedded sensor data, and has become an active direction in Artificial Intelligence. When the activities are high-level and sophisticated, such as the many technical strokes in badminton, recognition becomes challenging because discriminative features are difficult to extract from the raw sensor data. As end-to-end approaches, deep neural networks can learn and extract such features automatically. However, most current studies on sensor-based badminton activity recognition adopt CNN-based architectures, which struggle to capture temporal dependencies and a global view of the signal. To overcome these shortcomings, we propose a deep learning framework that combines convolutional layers, an LSTM structure, and a self-attention mechanism. Specifically, the framework automatically extracts local features of the sensor signals in the time domain, uses the LSTM to model the temporal structure of the badminton activity data, and attends to the information most essential to the recognition task. Experimental results on a real single-sensor badminton dataset show that the proposed framework achieves 97.83% accuracy on 37-class badminton activity recognition, outperforming existing methods while also requiring less training time and converging faster.
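The sketch below illustrates the general shape of such a CNN + LSTM + self-attention pipeline in PyTorch, assuming a single wearable sensor streaming several channels over a fixed time window. The channel count, window length, layer widths, kernel sizes, single attention head, and mean pooling are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvLSTMAttention(nn.Module):
    def __init__(self, in_channels=6, num_classes=37, hidden=128):
        super().__init__()
        # 1-D convolutions extract local features along the time axis
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # LSTM models temporal dependencies across the whole window
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        # Self-attention weighs the time steps most relevant to the stroke
        self.attn = nn.MultiheadAttention(hidden, num_heads=1, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (batch, channels, time)
        h = self.conv(x)               # (batch, 64, time)
        h = h.transpose(1, 2)          # (batch, time, 64)
        h, _ = self.lstm(h)            # (batch, time, hidden)
        a, _ = self.attn(h, h, h)      # attention over all time steps
        return self.fc(a.mean(dim=1))  # pool over time, classify 37 classes

model = ConvLSTMAttention()
logits = model(torch.randn(8, 6, 128))  # e.g. 8 windows of 128 samples
print(logits.shape)                     # torch.Size([8, 37])
```

Placing self-attention over the LSTM outputs is what lets every time step consult the whole window, which is the global-context ability a purely convolutional stack lacks.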

