Abstract

Human activity recognition (HAR) is a key application of health monitoring that requires continuous use of wearable devices to track daily activities. This article proposes an adaptive convolutional neural network for energy-efficient HAR (AHAR) suitable for low-power edge devices. Unlike traditional adaptive (early-exit) architectures that make the early-exit decision based on classification confidence, AHAR uses a novel adaptive architecture in which an output block predictor selects a portion of the baseline architecture to use during the inference phase. The experimental results show that traditional adaptive architectures suffer from performance loss, whereas our adaptive architecture provides similar or better performance than the baseline one while being energy efficient. We validate our methodology in classifying locomotion activities from two data sets: 1) Opportunity and 2) w-HAR. Compared to the fog/cloud computing approaches for the Opportunity data set, our baseline and adaptive architectures show a comparable weighted F1 score of 91.79% and 91.57%, respectively. For the w-HAR data set, our baseline and adaptive architectures outperform the state-of-the-art works with a weighted F1 score of 97.55% and 97.64%, respectively. Evaluation on real hardware shows that our baseline architecture is significantly more energy efficient ( <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$422.38\times $ </tex-math></inline-formula> less energy) and memory efficient ( <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$14.29\times $ </tex-math></inline-formula> less memory) compared to the works on the Opportunity data set.
For the w-HAR data set, our baseline architecture requires <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$2.04\times $ </tex-math></inline-formula> less energy and <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$2.18\times $ </tex-math></inline-formula> less memory compared to the state-of-the-art work. Moreover, experimental results show that our adaptive architecture is 12.32% (Opportunity) and 11.14% (w-HAR) more energy efficient than our baseline while providing similar (Opportunity) or better (w-HAR) performance with no significant memory overhead.
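The core idea sketched below is a hedged, illustrative interpretation of the output-block-predictor mechanism the abstract describes: rather than exiting early based on classification confidence, a lightweight predictor decides up front, per sample, how many blocks of the baseline network to run. All names, the toy "blocks", and the predictor rule are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of predictor-driven adaptive inference, assuming a baseline
# model split into sequential blocks with one exit head per depth.
# Everything here (block count, predictor rule, toy arithmetic) is hypothetical.

from typing import Callable, List

def adaptive_inference(
    x: List[float],
    blocks: List[Callable[[List[float]], List[float]]],
    heads: List[Callable[[List[float]], int]],
    block_predictor: Callable[[List[float]], int],
) -> int:
    """Run only the predictor-selected prefix of blocks, then classify."""
    n_blocks = block_predictor(x)                 # decided once, per sample
    n_blocks = max(1, min(n_blocks, len(blocks)))  # clamp to a valid depth
    h = x
    for blk in blocks[:n_blocks]:                 # remaining blocks are skipped,
        h = blk(h)                                # which is where energy is saved
    return heads[n_blocks - 1](h)                 # exit head for that depth

# Toy demo: two "blocks" that transform a 1-D feature, one head per depth.
blocks = [lambda h: [2 * v for v in h], lambda h: [v + 1 for v in h]]
heads = [lambda h: int(h[0] > 1), lambda h: int(h[0] > 2)]
# Illustrative rule: "easy" inputs take the shallow path, others the full one.
predictor = lambda x: 1 if x[0] < 0.5 else 2

shallow = adaptive_inference([0.2], blocks, heads, predictor)  # 1 block used
full = adaptive_inference([0.9], blocks, heads, predictor)     # both blocks used
```

In contrast to confidence-based early exit, this design pays the full cost of a block only when the predictor asks for it, which matches the abstract's claim of energy savings without the performance loss of confidence-thresholded exits.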
