Abstract

Deep learning applications are increasingly deployed in edge and mobile computing scenarios, driven by latency constraints, data security and privacy concerns, and other considerations. However, because of limited power delivery, battery lifetime, and computation resources, delivering real-time neural network inference on such devices requires specialized energy-efficient architectures and, at times, coordination between edge devices and more powerful cloud or fog facilities. This work investigates a realistic scenario in which an online scheduler must meet latency requirements even while edge computing resources and communication speed fluctuate dynamically, while also protecting user privacy. It further exploits the approximate-computing nature of neural networks, actively trading off excessive propagation paths for latency guarantees even when local resource provision is unstable. By combining neural network approximation with dynamic scheduling, the real-time deep learning system can adapt to varying latency/accuracy requirements and to the resource fluctuations of mobile-cloud applications. Experimental results demonstrate that the proposed scheduler significantly improves the energy efficiency of real-time neural networks on edge devices.
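The abstract summarizes rather than specifies the scheduling policy. As a minimal illustrative sketch (not the paper's actual algorithm), the Python snippet below models one way such a latency-aware trade-off could work: the scheduler selects the most accurate option, either a local early-exit propagation path or cloud offloading, that still fits the latency budget under the currently measured device load and link speed. All names, cost figures, and the exit/offload model here are assumptions for illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExitPoint:
    """A candidate early-exit branch of an approximate network (hypothetical)."""
    name: str
    local_cost_ms: float   # measured on-device inference time for this path
    accuracy: float        # validation accuracy of this exit

def schedule(exits: list[ExitPoint],
             latency_budget_ms: float,
             load_factor: float,
             uplink_ms: Optional[float] = None,
             cloud_cost_ms: float = 5.0,
             cloud_accuracy: float = 0.95) -> str:
    """Pick the most accurate option that still meets the latency budget.

    load_factor scales local costs to model fluctuating edge resources;
    uplink_ms models the current network round-trip time (None = offline).
    """
    best_name, best_acc = None, -1.0
    # Option 1: offload the full model to the cloud, if the link is fast enough.
    if uplink_ms is not None and uplink_ms + cloud_cost_ms <= latency_budget_ms:
        best_name, best_acc = "cloud", cloud_accuracy
    # Option 2: run a local early exit, with costs scaled by the current load.
    for e in exits:
        if e.local_cost_ms * load_factor <= latency_budget_ms and e.accuracy > best_acc:
            best_name, best_acc = e.name, e.accuracy
    # Fall back to the cheapest local path if nothing meets the budget.
    return best_name or exits[0].name

if __name__ == "__main__":
    exits = [ExitPoint("exit1", 8.0, 0.82),
             ExitPoint("exit2", 15.0, 0.89),
             ExitPoint("full", 30.0, 0.93)]
    # Heavily loaded device and a slow link: the scheduler degrades gracefully
    # to a shallower propagation path instead of missing the deadline.
    print(schedule(exits, latency_budget_ms=25.0, load_factor=2.0, uplink_ms=40.0))

Re-invoking the scheduler per inference request (or per monitoring interval) lets the system track resource fluctuation, which is the adaptivity the abstract describes.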
