Abstract

Predictive power allocation is conceived for energy-efficient video streaming over mobile networks using deep reinforcement learning. The goal is to minimize the accumulated energy consumption of each base station over a complete video streaming session under the constraint of avoiding video playback interruptions. To handle the continuous state and action spaces, we resort to the deep deterministic policy gradient (DDPG) algorithm for solving the formulated problem. In contrast to previous predictive power allocation policies that first predict future information with historical data and then optimize the power allocation based on the predicted information, the proposed policy operates in an on-line and end-to-end manner. By judiciously designing the state and action to depend only on slowly-varying average channel gains, we reduce the signaling overhead between the edge server and the base stations, and make it easier to learn a good policy. To further avoid playback interruption throughout the learning process and improve the convergence speed, we exploit the partially known model of the system dynamics by integrating the concepts of safety layer, post-decision state, and virtual experiences into the basic DDPG algorithm. Our simulation results show that the proposed policies converge to the optimal policy derived with perfect large-scale channel prediction and outperform the first-predict-then-optimize policy in the presence of prediction errors. By harnessing the partially known model, the convergence speed can be dramatically improved. The code for reproducing the results of this article is available at https://github.com/fluidy/twc2020.
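Since the abstract centers on the DDPG algorithm, the following minimal sketch illustrates its core actor-critic update with target networks. It is not the paper's implementation (see the linked repository for that): the state/action dimensions, network sizes, and hyper-parameters are illustrative assumptions, and the safety layer, post-decision state, and virtual experiences described above are omitted.

```python
# Minimal sketch of one DDPG update step (illustrative, not the paper's code).
import copy
import torch
import torch.nn as nn

state_dim, action_dim = 3, 1          # assumed: e.g. average channel gain, buffer state, remaining video
gamma, tau = 0.99, 0.005              # assumed discount factor and soft target-update rate

actor = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                      nn.Linear(64, action_dim), nn.Sigmoid())   # action in (0,1): normalized transmit power
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                       nn.Linear(64, 1))
actor_t, critic_t = copy.deepcopy(actor), copy.deepcopy(critic)  # target networks
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_step(s, a, r, s_next):
    """One gradient step from a minibatch of transitions (s, a, r, s')."""
    # Critic: regress Q(s, a) toward the bootstrapped target r + gamma * Q'(s', mu'(s')).
    with torch.no_grad():
        target_q = r + gamma * critic_t(torch.cat([s_next, actor_t(s_next)], dim=1))
    critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target_q)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: maximize the critic's value of the actor's own action (deterministic policy gradient).
    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

    # Soft-update the target networks toward the learned networks.
    for net, net_t in ((actor, actor_t), (critic, critic_t)):
        for p, p_t in zip(net.parameters(), net_t.parameters()):
            p_t.data.mul_(1 - tau).add_(tau * p.data)

# Example call on a random minibatch of 32 transitions (placeholder data).
s, a = torch.rand(32, state_dim), torch.rand(32, action_dim)
r, s_next = torch.rand(32, 1), torch.rand(32, state_dim)
ddpg_step(s, a, r, s_next)
```

In the paper's setting, a safety layer would additionally project the actor's raw action onto the set of powers that keep the playback buffer from emptying, which is what allows interruption-free learning from the start.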
