Abstract

As mobile devices become more prevalent, users are reassessing their expectations regarding the personalization of mobile services. The data collected by a mobile device's sensors provide an opportunity to gain insight into the user's profile. Recently, deep learning has gained momentum and has become the method of choice for solving machine learning problems. Interestingly, training a deep neural network on a mobile device is often mistakenly regarded as cumbersome. For instance, several deep learning frameworks provide only a CPU-based implementation for prediction tasks on a mobile device. In contrast to servers, a mobile computing environment imposes many domain-specific constraints that invite us to revisit the general computing approach used in deep learning framework implementations. In this paper, we propose a deep learning framework specifically designed for mobile device platforms. Our approach relies on the collaboration of the multicore CPU and the integrated GPU to accelerate deep learning computation on mobile devices. Our work exploits the shared memory architecture of mobile devices to enable CPU-GPU collaboration without any data copying. We analyze our approach with regard to three factors: the performance/portability trade-off, power efficiency, and memory management.
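The zero-copy CPU-GPU collaboration described above can be illustrated with a minimal sketch. This is not the paper's implementation: here two Python threads stand in for the two compute units, and a plain list stands in for a tensor residing in the SoC's unified memory. Each "unit" processes its own slice of the same buffer in place, so no data is ever copied between them; `relu_inplace` is a hypothetical stand-in for a network layer.

```python
# Sketch only: two threads mimic the multicore CPU and the integrated
# GPU of a mobile SoC sharing one buffer in unified memory.
import threading

def relu_inplace(buf, lo, hi):
    # Hypothetical layer stand-in: apply ReLU to buf[lo:hi] in place.
    for i in range(lo, hi):
        if buf[i] < 0.0:
            buf[i] = 0.0

def run(values):
    buf = list(values)              # one shared buffer, never copied
    mid = len(buf) // 2
    cpu = threading.Thread(target=relu_inplace, args=(buf, 0, mid))
    gpu = threading.Thread(target=relu_inplace, args=(buf, mid, len(buf)))
    cpu.start(); gpu.start()        # both units work concurrently...
    cpu.join(); gpu.join()          # ...on disjoint slices of the buffer
    return buf

print(run([-2.0, -1.0, 0.5, 3.0]))  # -> [0.0, 0.0, 0.5, 3.0]
```

Because the two workers write to disjoint slices, no synchronization beyond the final joins is needed; on real hardware the analogous design point is partitioning a layer's output tensor between CPU cores and GPU work-groups over a shared physical buffer.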
