Abstract

Rapidly evolving embedded applications continuously demand more functionality and better performance under tight energy and thermal budgets, making high energy efficiency a significant design challenge for mobile devices. Learning-based methods adapt to dynamic conditions and show great potential for runtime power management. However, with the ever-increasing complexity of both hardware and software, it is challenging for a learning agent to explore the state-action space sufficiently and quickly find an efficient management policy. In this paper, we propose a reinforcement learning-based multi-device collaborative power management approach to address this issue. Multiple devices operating under different runtime conditions acquire related knowledge during learning, and efficient knowledge sharing among these devices can accelerate the learning process and improve the quality of the learned policies. We integrate the proposed method with dynamic voltage and frequency scaling (DVFS) on the multicore processors of mobile devices. Experimental results on realistic applications show that collaborative power management achieves up to a $7\times$ speedup and 10% energy reduction compared with state-of-the-art learning-based approaches.
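To make the idea concrete, the following is a minimal illustrative sketch of tabular Q-learning for DVFS frequency selection with naive knowledge sharing (periodic Q-table averaging) across devices. The state space, action set, reward model, and sharing scheme here are all simplifying assumptions for illustration, not the paper's actual formulation.

```python
import random

FREQS = [0.6, 1.0, 1.4, 1.8]   # candidate CPU frequencies in GHz (assumed)
LOADS = ["low", "med", "high"]  # coarse workload states (assumed)

def init_q():
    """Fresh Q-table: one value per (workload state, frequency index) pair."""
    return {(s, a): 0.0 for s in LOADS for a in range(len(FREQS))}

def reward(load, a):
    # Assumed reward: penalize energy (roughly ~ f^2) plus a penalty
    # for running slower than the workload needs.
    f = FREQS[a]
    need = {"low": 0.6, "med": 1.0, "high": 1.4}[load]
    perf_penalty = max(0.0, need - f) * 5.0
    return -(f ** 2) - perf_penalty

def step(q, load, eps=0.1, alpha=0.5, gamma=0.9):
    """Epsilon-greedy action selection plus one Q-learning update."""
    if random.random() < eps:
        a = random.randrange(len(FREQS))
    else:
        a = max(range(len(FREQS)), key=lambda x: q[(load, x)])
    r = reward(load, a)
    nxt = random.choice(LOADS)  # assumed memoryless workload dynamics
    best_next = max(q[(nxt, x)] for x in range(len(FREQS)))
    q[(load, a)] += alpha * (r + gamma * best_next - q[(load, a)])
    return nxt

def share(tables):
    """Collaborative step: average Q-values across devices (one simple scheme)."""
    avg = {k: sum(t[k] for t in tables) / len(tables) for k in tables[0]}
    for t in tables:
        t.update(avg)

random.seed(0)
devices = [init_q() for _ in range(3)]
states = [random.choice(LOADS) for _ in devices]
for epoch in range(200):
    for i, q in enumerate(devices):
        states[i] = step(q, states[i])
    if epoch % 20 == 19:  # periodic knowledge sharing
        share(devices)

# Greedy policy of one device; it should typically favor lower
# frequencies for lighter workloads.
policy = {s: max(range(len(FREQS)), key=lambda a: devices[0][(s, a)])
          for s in LOADS}
print(policy)
```

In a real system the shared knowledge would be exchanged over a network and merged with more care (e.g., weighting by visit counts), but the averaging step above captures the basic intuition: each device benefits from state-action regions that other devices have already explored.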
