Abstract
The integration of edge and cloud computing is an emerging collaborative paradigm for artificial intelligence (AI) task processing. This paper proposes a new two-timescale online control framework that jointly optimizes the energy consumption of an edge-cloud collaborative intelligent computing system and the response delay of AI tasks. The key idea is to apply a dual-agent deep reinforcement learning approach, in which a Deep Deterministic Policy Gradient (DDPG) agent operates on a coarse-grained timescale to determine the edge computation resource configuration, while a Deep Q-Network (DQN) agent operates on a fine-grained timescale to conduct AI task offloading. The DDPG and DQN agents are bridged through a shared reward function. The results show that, compared with alternative approaches, the proposed framework reduces energy consumption by up to 46.5% and cuts task response latency by about half. Our approach can reduce the energy cost of a new generation of edge-cloud service providers.
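To make the two-timescale coupling concrete, the sketch below illustrates how a coarse-grained resource-configuration agent and a fine-grained offloading agent could interact through one shared reward. This is a minimal structural illustration, not the authors' implementation: the agent internals (DDPG/DQN training), the fine-to-coarse step ratio, the reward weights, and the toy energy/delay model are all assumptions introduced here for exposition.

```python
# Hypothetical sketch of the two-timescale control loop. Agent learning is
# stubbed out; only the timescale interaction and shared reward are shown.
import random

FINE_STEPS_PER_COARSE = 10  # assumed ratio of fine to coarse decision epochs

class CoarseAgent:
    """Stands in for the DDPG agent: picks a continuous edge CPU allocation."""
    def act(self, state):
        return random.uniform(0.1, 1.0)          # fraction of edge CPU capacity
    def learn(self, state, action, reward):      # DDPG update would go here
        pass

class FineAgent:
    """Stands in for the DQN agent: picks a discrete offloading target."""
    def act(self, state):
        return random.choice(["local", "edge", "cloud"])
    def learn(self, state, action, reward):      # DQN update would go here
        pass

def shared_reward(energy, delay, w_energy=0.5, w_delay=0.5):
    # Both agents are bridged by the same reward: a weighted penalty on
    # energy consumption and task response delay (weights are assumptions).
    return -(w_energy * energy + w_delay * delay)

def simulate_step(cpu_alloc, target):
    # Toy environment: offloading to the edge costs more energy when more
    # edge CPU is allocated but lowers delay; numbers are illustrative only.
    energy = {"local": 2.0, "edge": 1.0 + cpu_alloc, "cloud": 1.5}[target]
    delay = {"local": 3.0, "edge": 2.0 - cpu_alloc, "cloud": 2.5}[target]
    return energy, delay

coarse, fine = CoarseAgent(), FineAgent()
for epoch in range(3):                            # coarse-grained timescale
    cpu_alloc = coarse.act(state=epoch)
    epoch_reward = 0.0
    for t in range(FINE_STEPS_PER_COARSE):        # fine-grained timescale
        target = fine.act(state=(epoch, t))
        energy, delay = simulate_step(cpu_alloc, target)
        r = shared_reward(energy, delay)
        fine.learn((epoch, t), target, r)         # fine agent learns per task
        epoch_reward += r
    coarse.learn(epoch, cpu_alloc, epoch_reward)  # coarse agent learns per epoch
    print(f"epoch {epoch}: cpu_alloc={cpu_alloc:.2f}, reward={epoch_reward:.2f}")
```

The design point the sketch captures is that the fine-grained agent receives feedback per task while the coarse-grained agent accumulates the same shared reward over a whole epoch, so both policies are driven toward the joint energy-delay objective.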