Abstract

In recent years, there has been growing interest in bringing artificial intelligence capabilities to mobile devices. However, such work still faces several challenges, including constrained computation and memory resources, power drain, and thermal limitations. To deploy deep learning (DL) algorithms on mobile devices, we need to understand their behaviors. In this article, we explore the architectural behaviors of several mainstream DL frameworks on mobile devices by performing a comprehensive characterization of performance, accuracy, energy efficiency, and thermal behavior. We select four model compression methods, apply them to the networks, and analyze their impact on the number of nodes, memory usage, execution time, model size, inference time, energy consumption, and thermal distribution. With these insights into the characteristics of DL-based mobile applications, we hope to guide the design of future smartphone platforms toward lower energy consumption.
