Abstract

With the advent of Tiny Machine Learning (tinyML), it is increasingly feasible to deploy optimized ML models on constrained, battery-less Internet of Things (IoT) devices with minimal energy availability. However, due to the unpredictable and dynamic harvesting environment, successfully running tinyML on battery-less devices remains challenging. In this paper, we present the energy-aware deployment and management of tinyML algorithms and application tasks on battery-less IoT devices. We study the trade-offs between different inference strategies, analyzing under which circumstances it is better to make the decision locally or to send the data to the Cloud, where a heavyweight ML model is deployed, while respecting energy, accuracy, and time constraints. To decide which of these two options is preferable while satisfying all constraints, we define an energy-aware tinyML optimization algorithm. Our approach is evaluated in real experiments with a prototype for battery-less person detection in two environments: (i) a controllable setup with artificial light, and (ii) a dynamic harvesting environment based on natural light. Our results show that the local inference strategy performs best in terms of execution speed in the controllable harvesting environment: it can execute 3 times as frequently as remote inference at a harvesting current of 2 mA with a 1.5 F capacitor. In the realistic harvesting scenario with natural light, the energy-aware optimization algorithm leads the device to favor remote inference under high light conditions, owing to the better accuracy of the Cloud-based model, and to switch to local inference otherwise.
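The local-versus-remote decision described above can be sketched as a constrained selection problem. The following is a minimal illustrative sketch, not the paper's actual algorithm: all per-inference energy, latency, and accuracy figures are hypothetical, as is the `choose_inference` helper itself.

```python
# Illustrative sketch of an energy-aware local-vs-remote inference choice.
# All cost figures are hypothetical placeholders, not values from the paper.

def choose_inference(stored_energy_j: float, deadline_s: float) -> str:
    # Hypothetical per-inference costs: (energy in J, latency in s, accuracy).
    # Remote inference costs more energy/time (radio transfer) but the
    # Cloud model is assumed more accurate than the on-device tinyML model.
    LOCAL = (0.05, 0.4, 0.88)
    REMOTE = (0.20, 1.5, 0.97)

    candidates = []
    for name, (energy, latency, acc) in (("local", LOCAL), ("remote", REMOTE)):
        # Keep only strategies that fit the energy budget and the deadline.
        if energy <= stored_energy_j and latency <= deadline_s:
            candidates.append((acc, name))

    if not candidates:
        return "sleep"  # harvest more energy before acting

    # Among feasible strategies, prefer the more accurate one; when energy
    # is plentiful (e.g. strong light) this tends to select remote inference.
    return max(candidates)[1]
```

For example, with little stored energy only the cheap local model is feasible (`choose_inference(0.1, 2.0)` returns `"local"`), while a full capacitor makes the more accurate remote option win (`choose_inference(1.0, 5.0)` returns `"remote"`).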
