Abstract

Human activity recognition (HAR) based on wearable devices has progressively advanced context-aware computing in domains such as healthcare, smart homes, and Industry 4.0. With the growing prominence of machine learning and the Internet of Things, many deep learning approaches have been proposed that achieve strong performance on inertial data. In this study, we explore different joint time–frequency representations of the sensor data, in conjunction with a convolutional neural network (ConvNet), to perform activity classification. We also consider the concurrent use of two different data representations in a cooperative bi-stream ConvNet configuration. Scaling of the joint time–frequency representation is explored to reduce the ConvNet model size and, hence, the computational complexity of the proposed approach. The classification performance and energy efficiency of the proposed method are evaluated on a real-world public HAR dataset. The performance of pairwise combinations, as well as single streams, of the two most prominent joint time–frequency representations is compared to that of the state of the art. The proposed method is implemented on Google's Edge TPU platform to investigate the trade-off between energy footprint and classification performance, which is critical in IoT applications where machine learning inference must be performed on resource-constrained edge devices. The highest overall classification accuracy and F1 score achieved by our method were 98.39% and 96.86%, respectively, a significant improvement over the state of the art.
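To make the described pipeline concrete, the sketch below illustrates the general idea under stated assumptions: a joint time–frequency image is computed from an inertial window and fed to a two-stream ConvNet whose per-stream features are concatenated before classification. The abstract does not name the specific pair of representations, layer sizes, sampling rate, or class count, so the log-magnitude STFT spectrogram, the 50 Hz rate, the six-class output, and all layer dimensions here are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a bi-stream ConvNet over joint time-frequency images.
# Representation choice (STFT spectrogram), layer sizes, and fusion point
# are assumptions for illustration, not the paper's exact architecture.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

def tf_image(window, fs=50, nperseg=64):
    """One assumed joint time-frequency representation of a 1-D inertial
    signal: the log-magnitude STFT spectrogram."""
    _, _, Zxx = stft(window, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Zxx)).astype(np.float32)

class StreamCNN(nn.Module):
    """A single ConvNet stream operating on one time-frequency image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Fixed-size output regardless of input scaling, so the same
            # head works when the representation is downscaled.
            nn.AdaptiveAvgPool2d((4, 4)),
        )
    def forward(self, x):
        return self.features(x).flatten(1)

class BiStreamCNN(nn.Module):
    """Two cooperative streams; per-stream features are concatenated
    before a linear classifier (assumed six activity classes)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.stream_a, self.stream_b = StreamCNN(), StreamCNN()
        self.classifier = nn.Linear(2 * 32 * 4 * 4, n_classes)
    def forward(self, xa, xb):
        feats = torch.cat([self.stream_a(xa), self.stream_b(xb)], dim=1)
        return self.classifier(feats)

# Usage with a synthetic 2.56 s accelerometer window at an assumed 50 Hz
# (128 samples); in practice the two streams would see two *different*
# joint time-frequency representations of the same window.
window = np.random.randn(128)
img = torch.from_numpy(tf_image(window))[None, None]  # (batch, chan, freq, time)
logits = BiStreamCNN()(img, img)
print(logits.shape)  # torch.Size([1, 6])
```

A deployment like the one described would additionally quantize such a model (e.g., to 8-bit integers) for Edge TPU inference, which is where the scaling of the time–frequency image trades accuracy against model size and energy.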
