Abstract

Sixth Generation (6G) wireless communication aims to enable ubiquitous intelligent connectivity in future Space-Air-Ground-Ocean integrated networks, with extremely low latency and enhanced global coverage. However, the explosive growth of Internet of Things devices makes it challenging for smart devices with limited resources to process the tremendous volume of generated data. In 6G networks, conventional Mobile Edge Computing (MEC) systems struggle to satisfy the requirements of ubiquitous computing and intelligence under extremely high mobility, limited resources, and time-varying conditions. In this paper, we propose the model of Wireless Computing Power Networks (WCPN), which jointly unifies the computing resources of both end devices and MEC servers. Furthermore, we formulate a new task-transfer problem to optimize the allocation of computation and communication resources in WCPN. The main objective of task transfer is to minimize execution latency and energy consumption subject to resource limitations and task requirements. To solve the formulated problem, we propose a multi-agent Deep Reinforcement Learning (DRL) algorithm that finds the optimal task-transfer and resource-allocation strategies. The DRL agents collaborate with one another to train a global strategy model through the proposed asynchronous federated aggregation scheme. Numerical results show that the proposed scheme can improve computation efficiency, speed up convergence, and enhance utility performance.
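To illustrate the asynchronous federated aggregation idea mentioned above, the sketch below shows one common form of it: each agent pushes its locally trained model weights as soon as training finishes, and the server mixes them into the global model with a coefficient discounted by staleness. The function name, the staleness discount, and the `base_mix` parameter are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def async_federated_aggregate(global_weights, agent_weights,
                              staleness, base_mix=0.5):
    """Merge one agent's model update into the global model.

    Asynchronous aggregation: agents push updates as soon as local
    training finishes, without waiting for a synchronization round.
    Older (stale) updates get a smaller mixing coefficient.
    Hypothetical sketch; the staleness discount is an assumption.
    """
    alpha = base_mix / (1.0 + staleness)  # discount stale updates
    # Convex combination of global and agent weights, layer by layer
    return [(1.0 - alpha) * g + alpha * w
            for g, w in zip(global_weights, agent_weights)]
```

A fresh update (staleness 0) is mixed in at the full `base_mix` rate, while an update that is several global versions old is damped, which helps keep asynchronous training stable.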
