Abstract

Deep learning offers mobile applications new opportunities to achieve higher performance than before. However, running deep learning on today's mobile devices demands expensive resources and places a significant burden on battery life and limited memory. Existing approaches either offload computation to cloud or edge infrastructure, which requires uploading user data and thus risks privacy leakage and large data transfers, or adopt compressed deep models, which degrades accuracy. This paper presents DeepShark, a platform that enables flexible resource allocation on mobile devices when using commercial off-the-shelf (COTS) deep learning systems. Compared to existing approaches, DeepShark seeks a balance between time and memory efficiency according to user requirements: it breaks a sophisticated deep model down into a stream of code blocks and executes these blocks incrementally on the system-on-chip (SoC). As a result, DeepShark requires significantly less memory on the mobile device while preserving the original model accuracy. In addition, all user data involved in model processing is handled locally, avoiding unnecessary data transfer and network latency. DeepShark is implemented on two COTS deep learning systems, Caffe and TensorFlow. Experimental evaluations demonstrate its effectiveness in terms of memory footprint and energy cost.
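To illustrate the block-streaming idea described above, the following is a minimal sketch (not the authors' implementation) of executing a model incrementally as a sequence of blocks, so that only one block's layers need to be resident in memory at a time. It assumes a TensorFlow/Keras workflow; the block boundaries and the `load_block` helper are hypothetical placeholders for however a real system would deserialize each block's weights from storage.

```python
# Hedged sketch: incremental, block-by-block execution of a model.
# `load_block` is a hypothetical loader; a real system would read the
# block's weights from flash instead of constructing fresh layers.
import numpy as np
import tensorflow as tf


def load_block(block_id):
    """Return the layers belonging to one model block (illustrative only)."""
    blocks = {
        0: [tf.keras.layers.Conv2D(16, 3, activation="relu"),
            tf.keras.layers.MaxPooling2D()],
        1: [tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D()],
        2: [tf.keras.layers.Dense(10, activation="softmax")],
    }
    return blocks[block_id]


def run_incrementally(x, num_blocks=3):
    # Pass the intermediate activation forward block by block; each block
    # is loaded, applied, and released before the next is brought in.
    for block_id in range(num_blocks):
        layers = load_block(block_id)
        for layer in layers:
            x = layer(x)
        del layers  # allow this block's weights to be reclaimed
    return x


if __name__ == "__main__":
    image = np.random.rand(1, 32, 32, 3).astype("float32")
    probs = run_incrementally(tf.constant(image))
    print(probs.shape)  # (1, 10)
```

Only the current block's weights plus one intermediate activation are live at any point, which is the source of the memory savings the abstract claims, at the cost of per-block load latency that the platform trades off against user requirements.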
