Abstract

The coming B5G/6G era will bring a wealth of challenging applications generating zettabytes of data. The computing power network (CPN) offers a promising solution by extending computing power from a handful of data centers to a multitude of network edges. However, when handling resource-hungry, real-time applications, most existing research makes poor use of idle computing and caching resources and can hardly evaluate the contribution of each individual resource provider. We therefore propose an in-network pooling framework built on a novel modified deep reinforcement learning (DRL) scheme: a dynamic resource pool (RP) is first modeled to fully exploit idle network resources; the joint computing-and-caching problem is then formulated as the maximization of long-term system utility; finally, Attention-based Proximal Policy Optimization (APPO) is employed to solve it. In particular, the integrated attention mechanism quantifies each RP's contribution to the learning process. Experimental results demonstrate the superiority of the proposed algorithm over existing alternatives.
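The abstract's two key ingredients, an attention mechanism that scores each resource pool's contribution and a PPO-based optimizer, can be illustrated with a minimal sketch. This is not the paper's implementation: the feature shapes, function names, and the use of scaled dot-product attention over RP feature vectors are all assumptions made for illustration, and the second function is simply the standard PPO-Clip surrogate.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - x.max())
    return e / e.sum()

def rp_attention_weights(query, rp_features):
    """Hypothetical attention over resource-pool (RP) feature vectors.

    query:       (d,) state embedding of the scheduling agent
    rp_features: (n_rp, d) one feature vector per RP
    Returns a weight per RP; the weights sum to 1 and can be read as
    each RP's relative contribution (an assumed reading, for illustration).
    """
    d = query.shape[-1]
    scores = rp_features @ query / np.sqrt(d)  # scaled dot-product scores
    return softmax(scores)

def ppo_clipped_objective(ratio, advantage, eps=0.2):
    """Standard PPO-Clip surrogate: min(r*A, clip(r, 1-eps, 1+eps)*A)."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

# Toy usage: three RPs, 4-dimensional features.
rng = np.random.default_rng(0)
w = rp_attention_weights(rng.normal(size=4), rng.normal(size=(3, 4)))
print(w)  # three non-negative weights summing to 1
```

In an APPO-style agent the attention weights would feed the policy/value networks, while the clipped surrogate caps how far each policy update can move, which is what makes PPO stable enough for long-horizon utility maximization.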
