Abstract

Sixth-generation (6G) mobile networks are expected to evolve substantially in vertical industry scenarios, where deep edge networks (DENs) become an important network structure for the intelligent management of computing, caching, and communication (3C) resources. In this paper, we consider two pervasive scenarios, a single-edge scenario and a multi-edge scenario, for the deep integration of wireless communication and computation based on real-time adaptive collaboration. Specifically, in the single-edge scenario, a novel deep reinforcement learning (DRL)-based framework jointly optimizes task scheduling, transmission power, and CPU cycle frequency under time-varying channel conditions. Meanwhile, to alleviate interference in the multi-edge scenario, we propose a multi-agent deep deterministic policy gradient (MADDPG) algorithm that minimizes total energy consumption and latency. Numerical experiments demonstrate that, by jointly accounting for task scheduling, transmission power, mutual channel interference, and CPU cycle frequency, the proposed methods substantially reduce the system's total overhead (i.e., energy consumption and delay) compared with conventional benchmarks.
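To make the optimization target concrete, the sketch below illustrates the conventional mobile edge computing overhead model that abstracts of this kind typically refer to: a weighted sum of energy consumption and delay for local execution (delay C/f, energy kappa*f^2*C) versus offloading over a Shannon-rate link. All parameter names and values here are illustrative assumptions for the sketch; the paper's exact formulation, notation, and DRL action space are not given in the abstract.

```python
import math

# Illustrative constants (assumed for this sketch, not taken from the paper).
KAPPA = 1e-27        # effective switched-capacitance coefficient of the chip
BANDWIDTH = 1e6      # uplink channel bandwidth in Hz
NOISE_POWER = 1e-13  # receiver noise power in W

def local_overhead(cycles, cpu_freq, w_energy=0.5, w_delay=0.5):
    """Weighted energy/delay cost of executing a task locally.

    Standard MEC model: delay = C / f, energy = kappa * f^2 * C,
    where f is the CPU cycle frequency chosen by the agent.
    """
    delay = cycles / cpu_freq
    energy = KAPPA * cpu_freq ** 2 * cycles
    return w_energy * energy + w_delay * delay

def offload_overhead(bits, tx_power, channel_gain, w_energy=0.5, w_delay=0.5):
    """Weighted energy/delay cost of offloading a task to the edge server.

    Shannon-rate transmission model: r = B * log2(1 + p*h / sigma^2),
    where p is the transmission power chosen by the agent.
    """
    rate = BANDWIDTH * math.log2(1 + tx_power * channel_gain / NOISE_POWER)
    delay = bits / rate
    energy = tx_power * delay
    return w_energy * energy + w_delay * delay

# A DRL agent's task-scheduling action would pick whichever option
# minimizes the total overhead for the current channel state, e.g.:
task_bits, task_cycles, gain = 1e6, 1e9, 1e-7
best = min(
    ("local", local_overhead(task_cycles, cpu_freq=1e9)),
    ("offload", offload_overhead(task_bits, tx_power=0.1, channel_gain=gain)),
    key=lambda choice: choice[1],
)
print(best)
```

In the multi-edge setting described in the abstract, each agent's achievable rate would additionally depend on the interference from other devices' transmission powers, which is what motivates the MADDPG formulation over independent single-agent DRL.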
