Abstract

The integration of edge and cloud computing is an emerging collaborative paradigm for artificial intelligence (AI) task processing. This paper proposes a new two-timescale online control framework that jointly optimizes the energy consumption and the AI task response delay of an edge-cloud collaborative intelligent computing system. The key idea is a dual-agent deep reinforcement learning approach: a Deep Deterministic Policy Gradient (DDPG) agent operates on a coarse-grained timescale to determine the edge computation resource configuration, while a Deep Q-Network (DQN) agent operates on a fine-grained timescale to make AI task offloading decisions. The two agents are bridged through a shared reward function. The results show that, compared with alternative approaches, the proposed framework reduces energy consumption by up to 46.5% and roughly halves the task response latency. Our approach can cut energy costs for a new generation of edge-cloud service providers.
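To make the two-timescale structure concrete, the following is a minimal sketch of the control loop described above: a coarse-timescale agent (standing in for DDPG) sets the edge resource configuration once per frame, a fine-timescale agent (standing in for DQN) makes an offloading decision every slot, and both are credited with the same shared reward trading off energy against delay. All names, the system model, and the reward weighting are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of the dual-agent, two-timescale control loop.
# The learning machinery (DDPG/DQN updates) is omitted; random policies
# stand in for the trained agents to show the loop structure only.
import random

SLOTS_PER_FRAME = 4  # fine-grained slots inside one coarse frame (assumed)

def shared_reward(energy, delay, w=0.5):
    """Shared reward bridging both agents: lower energy and delay are better."""
    return -(w * energy + (1 - w) * delay)

class CoarseAgent:
    """Stand-in for the DDPG agent: picks a continuous CPU-frequency scale."""
    def act(self):
        return random.uniform(0.2, 1.0)  # fraction of max edge CPU frequency

class FineAgent:
    """Stand-in for the DQN agent: picks where to run each arriving AI task."""
    ACTIONS = ("edge", "cloud")
    def act(self):
        return random.choice(self.ACTIONS)

def system_step(freq_scale, offload):
    """Toy system model (assumed): energy grows with frequency, delay shrinks."""
    if offload == "edge":
        energy = 2.0 * freq_scale ** 2  # dynamic power roughly ~ f^2 (simplified)
        delay = 1.0 / freq_scale        # faster edge CPU -> shorter delay
    else:
        energy = 0.5                    # transmission energy only
        delay = 2.0                     # extra round trip to the cloud
    return energy, delay

def run(frames=3, seed=0):
    random.seed(seed)
    coarse, fine = CoarseAgent(), FineAgent()
    rewards = []
    for _ in range(frames):
        freq_scale = coarse.act()       # coarse timescale: once per frame
        for _ in range(SLOTS_PER_FRAME):
            offload = fine.act()        # fine timescale: every task slot
            energy, delay = system_step(freq_scale, offload)
            rewards.append(shared_reward(energy, delay))
    return rewards

rewards = run()
```

In a full implementation, the frame-level reward fed to the DDPG agent would aggregate the slot-level rewards seen by the DQN agent, which is what couples the two decision loops.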
