Abstract

For the next generation of communication systems, low latency is an urgent requirement driven by ever-increasing computation demands. In response, mobile edge computing (MEC) with energy harvesting (EH) is a promising technology for sustained improvement of the computation experience. However, the frequently varying harvested energy, coupled with variable computing tasks and the changing computation capacity of servers, makes the computation offloading problem highly dynamic. To obtain satisfactory computation quality in such a highly dynamic setting, devices must learn to take multiple continuous and discrete actions while optimizing system performance metrics such as latency and energy efficiency. In this paper, we propose a continuous-discrete hybrid decision based deep reinforcement learning algorithm for dynamic computation offloading. Specifically, the actor outputs continuous actions (offloading ratio and local computation capacity) for every server, while the critic outputs the discrete action (server selection) and also evaluates the actor's performance for neural network updating. Simulation results validate the effectiveness of the proposed algorithm, which demonstrates superior generalization ability and achieves better performance than discrete decision based deep reinforcement learning methods.
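To make the hybrid actor-critic structure concrete, the following is a minimal PyTorch-style sketch of the architecture described above. It is illustrative only: the state dimension, number of servers, and layer sizes (STATE_DIM, NUM_SERVERS, PARAMS_PER_SERVER, the 128-unit hidden layers) are assumptions, not values from the paper. The actor produces a (offloading ratio, local computation capacity) pair per server, and the critic scores each server given the state and those continuous actions, with the discrete server selection taken as the argmax over its outputs.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration; the paper does not specify them.
STATE_DIM = 10          # e.g., task size, battery level, channel gains
NUM_SERVERS = 3         # candidate MEC servers
PARAMS_PER_SERVER = 2   # offloading ratio, local computation capacity

class Actor(nn.Module):
    """Outputs continuous actions (offloading ratio, local computation
    capacity) for every candidate server."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, NUM_SERVERS * PARAMS_PER_SERVER),
            nn.Sigmoid(),  # squash to [0, 1]; rescale to physical ranges as needed
        )

    def forward(self, state):
        return self.net(state).view(-1, NUM_SERVERS, PARAMS_PER_SERVER)

class Critic(nn.Module):
    """Scores each discrete action (server) given the state and the actor's
    continuous actions; the highest-scoring server is selected, and the same
    Q-values serve as the learning signal for updating the actor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + NUM_SERVERS * PARAMS_PER_SERVER, 128), nn.ReLU(),
            nn.Linear(128, NUM_SERVERS),  # one Q-value per server
        )

    def forward(self, state, cont_actions):
        x = torch.cat([state, cont_actions.flatten(1)], dim=-1)
        return self.net(x)

# Usage: pick the server via argmax over the critic's Q-values, then read
# the corresponding continuous actions from the actor's output.
actor, critic = Actor(), Critic()
state = torch.randn(1, STATE_DIM)
cont = actor(state)                 # shape (1, NUM_SERVERS, PARAMS_PER_SERVER)
q_values = critic(state, cont)      # shape (1, NUM_SERVERS)
server = q_values.argmax(dim=-1)    # discrete action: server selection
ratio, local_capacity = cont[0, server.item()]
```

This mirrors parameterized-action designs in which a single critic both ranks the discrete choices and backpropagates a value signal into the continuous actor; the exact training losses and update rules used in the paper are not given in the abstract.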
