Abstract

This work analyses the most relevant research conducted under the mobile cloud computing paradigm to bring vision tasks supported by state-of-the-art deep convolutional neural networks closer to the end user through collaborative intelligence. In particular, this review comprehensively addresses collaborative inference on convolutional networks, offering the reader a detailed explanation of the main methods and technologies used to partition and deploy such models across the UE-edge-cloud continuum. These approaches have made it possible to leverage the capabilities of resource-constrained devices to alleviate, and ideally eliminate, the traditional dependence on the cloud for high-performance computing, thereby enabling a more rational exploitation of the supporting hardware infrastructure. The paper details the technical aspects of the frameworks designed to support these tasks, examining and comparing the mechanisms and techniques used to derive suitable partitioning configurations. Moreover, the study outlines the algorithmic solutions developed for synthesizing optimal co-inference schemes, covering the design conventions adopted and the optimizations applied to improve overall performance. It concludes with a discussion of the specific challenges that have arisen thus far and the next steps to be taken in this field.
