Abstract

Many emerging multimedia mobile applications rely heavily on image recognition of both static images and live video streams. Image recognition is commonly performed with deep neural networks (DNNs), which achieve high accuracy but incur significant computation latency and energy consumption on resource-constrained smartphones. Recent efforts to address these issues include cloud offloading and reducing the complexity of the DNNs, which, however, introduce increased network latency or reduced accuracy, respectively. In-memory caching has also been explored, matching images by similarity rather than exact equality. However, such approximate caching systems often treat devices as static nodes and do not fully exploit the mobile and collaborative nature of smartphones without outside infrastructure. Another consequence of treating nodes as static is that they require cache sizes larger than what is feasible for individual mobile applications. In this paper we introduce Co-Cache, an in-memory caching paradigm that supports infrastructure-less collaborative computation reuse in smartphone image recognition. Co-Cache exploits the inertial movement of smartphones, the locality inherent in video streams, and information from nearby peer-to-peer devices to maximize computation reuse opportunities in mobile image recognition. Our extensive evaluation shows that, compared to other caching systems, Co-Cache reduces the required number of cache entries by 50–70% while lowering the average latency of standard image recognition applications by up to 94% with minimal loss of recognition accuracy.
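To make the idea of similarity-based (approximate) caching concrete, the sketch below shows a toy in-memory cache that reuses a previous recognition result whenever a new frame's feature vector is sufficiently similar to a stored one. This only illustrates the general mechanism the abstract refers to, not Co-Cache's actual design; the cosine-similarity test, the threshold, the FIFO eviction, and the run_dnn callback are all illustrative assumptions.

    # Minimal sketch of similarity-based (approximate) cache lookup.
    # Illustrative only: not Co-Cache's actual implementation.
    import numpy as np

    class ApproximateCache:
        def __init__(self, threshold=0.9, capacity=64):
            self.threshold = threshold   # minimum cosine similarity to count as a hit
            self.capacity = capacity     # small per-device cache, as motivated above
            self.entries = []            # list of (feature_vector, cached_result)

        def lookup(self, feature):
            """Return a cached recognition result if a stored entry is similar enough."""
            best_sim, best_result = -1.0, None
            for stored, result in self.entries:
                sim = float(stored @ feature /
                            (np.linalg.norm(stored) * np.linalg.norm(feature) + 1e-9))
                if sim > best_sim:
                    best_sim, best_result = sim, result
            return best_result if best_sim >= self.threshold else None

        def insert(self, feature, result):
            """Insert a new entry, evicting the oldest one (FIFO) when full."""
            if len(self.entries) >= self.capacity:
                self.entries.pop(0)
            self.entries.append((feature, result))

    def recognize(frame_feature, cache, run_dnn):
        """Only run the expensive DNN when no sufficiently similar frame was cached."""
        cached = cache.lookup(frame_feature)
        if cached is not None:
            return cached                # computation reuse: skip the DNN entirely
        result = run_dnn(frame_feature)
        cache.insert(frame_feature, result)
        return result

In this toy version, latency savings come entirely from cache hits that avoid the DNN invocation; a collaborative scheme like the one described above would additionally populate the cache with entries received from nearby peer devices.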
