Abstract

Mobile edge computing (MEC) offers devices that rely on wireless power transfer (WPT) an opportunity to accomplish computationally demanding tasks. Such WPT-powered MEC systems have yet to be optimized for long-term efficiency because of the random and time-varying task demands and wireless channel states of the devices. This paper presents an augmented two-stage deep Q-network (DQN), referred to as “TS-DQN”, for the online optimization of WPT-powered MEC systems, in which the WPT, offloading schedule, channel allocation, and CPU configurations of the edge server and devices are jointly optimized to minimize the long-term average energy requirement of the system. The key idea is to design a DQN that learns the channel allocation and task admission, while the WPT, offloading time, and CPU configurations are efficiently optimized to evaluate the reward of the DQN precisely and to reduce its action space substantially. In addition, a new action generation method is developed to expand and diversify the actions of the DQN, further accelerating its convergence. As validated by simulations, the proposed TS-DQN is considerably more energy efficient and converges much faster than an alternative that directly uses the state-of-the-art Deep Deterministic Policy Gradient (DDPG) algorithm to learn all decision variables.
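To make the two-stage decomposition concrete, the following is a minimal sketch, not the authors' code: an outer DQN picks the discrete action (channel allocation and task admission per device), and an inner stage, which in the paper would be an efficient optimization of the continuous variables (WPT duration, offloading time, CPU frequencies), evaluates the reward as the negative energy. All names, dimensions, and the toy energy model below are illustrative assumptions, and the inner stage is replaced by a placeholder closed form.

```
# Sketch of the two-stage idea: outer DQN over discrete actions,
# inner solver evaluating the reward. Hypothetical names/model throughout.
import random
import numpy as np
import torch
import torch.nn as nn

N_DEVICES, N_CHANNELS = 4, 2
STATE_DIM = 2 * N_DEVICES                   # e.g., per-device task size and channel gain
N_ACTIONS = (N_CHANNELS + 1) ** N_DEVICES   # a channel per device, or 0 = not admitted

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def decode_action(a):
    """Map a flat action index to a per-device channel choice (0 = not admitted)."""
    alloc = []
    for _ in range(N_DEVICES):
        alloc.append(a % (N_CHANNELS + 1))
        a //= N_CHANNELS + 1
    return alloc

def inner_stage(state, alloc):
    """Stand-in for the second stage: given the discrete decision, the paper
    would optimize WPT duration, offloading time, and CPU frequencies and
    return the minimum energy. A toy closed form is used here instead."""
    task, gain = state[:N_DEVICES], state[N_DEVICES:]
    energy = 0.0
    for i, ch in enumerate(alloc):
        if ch == 0:
            energy += 5.0 * task[i]                 # penalty for a task not admitted
        else:
            energy += task[i] / max(gain[i], 1e-3)  # cheaper on better channels
    return -energy                                  # reward = negative energy

def step(state, eps=0.1):
    """One epsilon-greedy DQN step; the inner stage supplies the reward."""
    s = torch.tensor(state, dtype=torch.float32)
    if random.random() < eps:
        a = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            a = int(q_net(s).argmax())
    reward = inner_stage(state, decode_action(a))
    # One-step TD target (no replay buffer or target network in this sketch).
    target = torch.tensor(reward, dtype=torch.float32)
    loss = (q_net(s)[a] - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward

for _ in range(100):
    step(np.random.rand(STATE_DIM))
```

Because only the channel allocation and task admission are left to the DQN, the action space stays discrete and comparatively small; the continuous variables never enter the learned policy, which is what distinguishes this decomposition from learning all decision variables with DDPG.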
