Abstract

As a promising technology for improving the computation experience of mobile devices, mobile edge computing (MEC) is emerging as a paradigm to meet tremendously increasing computation demands. In this paper, a mobile edge computing system consisting of an edge server and multiple mobile devices with energy harvesting is considered. Specifically, each device decides its offloading ratio and local computation capacity, both of which take continuous values. Each device is equipped with a task load queue and an energy harvesting module, which increases the system dynamics and makes the optimal offloading decision time-dependent. To minimize the long-term sum cost of execution time and energy consumption, we develop a continuous-control deep reinforcement learning algorithm for computation offloading. Utilizing the actor-critic learning approach, we propose a centralized-learning policy for each device. By incorporating the states of other devices during centralized learning, the proposed method learns to coordinate among all devices. Simulation results validate the effectiveness of the proposed algorithm, which demonstrates superior generalization ability and achieves better performance than discrete-decision deep reinforcement learning methods.
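To make the continuous-control actor-critic idea concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes a hypothetical per-device state (queue length, battery level, harvested energy, channel gain), an action consisting of the offloading ratio and a local CPU-frequency fraction, and a DDPG-style update in PyTorch with the reward taken as the negative weighted cost of delay and energy. All names, dimensions, and hyperparameters here are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical per-device state: [queue length, battery level, harvested energy, channel gain]
STATE_DIM = 4
# Continuous action: [offloading ratio in [0, 1], local CPU-frequency fraction in [0, 1]]
ACTION_DIM = 2

class Actor(nn.Module):
    """Maps a device state to a continuous offloading decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, ACTION_DIM), nn.Sigmoid(),  # squash both action components into [0, 1]
        )
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Scores a (state, action) pair; a centralized critic could instead take all devices' states."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def cost(delay, energy, w_delay=0.5, w_energy=0.5):
    """Weighted sum of execution time and energy consumption (reward is its negative)."""
    return w_delay * delay + w_energy * energy

# One illustrative update step on a randomly generated transition batch.
actor, critic = Actor(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

s = torch.rand(32, STATE_DIM)                     # current states
a = torch.rand(32, ACTION_DIM)                    # actions taken (with exploration noise)
r = -cost(torch.rand(32, 1), torch.rand(32, 1))   # negative cost as reward
s_next = torch.rand(32, STATE_DIM)
gamma = 0.99

# Critic: one-step TD target (target networks and replay buffer omitted for brevity).
with torch.no_grad():
    target = r + gamma * critic(s_next, actor(s_next))
critic_loss = nn.functional.mse_loss(critic(s, a), target)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# Actor: ascend the critic's value of the actor's own action.
actor_loss = -critic(s, actor(s)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```

In a centralized-learning setup along the lines the abstract describes, the critic would typically observe the joint state of all devices during training, while each actor acts on its own local state at execution time.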
