Abstract

Mobile edge computing (MEC) is a promising technology for satisfying the ever-increasing demand for low-latency, ultra-reliable services. However, limited computing capability and a dynamic network environment make it challenging to process massive amounts of data with low latency. In this paper, we consider a dynamic MEC network with a high-performance edge server, multiple time-varying channels, and multiple mobile devices. We aim to find a policy that maximizes the processing success rate of computational tasks and the fairness index of the system while minimizing processing delay. To this end, we propose a heuristic-assisted multiagent reinforcement learning (RL) framework that jointly optimizes computation offloading and resource allocation. On one hand, heuristic search is introduced to find a better resource allocation policy at the edge server and, in turn, to assist the multiagent RL algorithm in determining the offloading policy at the mobile devices. On the other hand, a novel parametrized multiagent RL algorithm based on soft actor-critic (SAC) is proposed to improve the effectiveness and applicability of the framework. Simulation results on average cumulative reward, success rate, processing delay, and fairness index verify the superiority of the proposed framework and algorithm for this problem.
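Note: the abstract does not specify which fairness metric is used; a common choice in multi-device MEC studies is Jain's fairness index, shown below purely as an illustrative formula, where $x_i$ denotes the per-device performance measure (e.g., task success rate) of device $i$ among $N$ devices:

$$
J(x_1, \ldots, x_N) = \frac{\left(\sum_{i=1}^{N} x_i\right)^2}{N \sum_{i=1}^{N} x_i^2},
$$

which equals $1$ when all devices achieve identical performance and approaches $1/N$ in the most unfair case.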
