Abstract

In mobile edge computing (MEC) systems, network entities and mobile devices need to make decisions to enable efficient use of network and computational resources. Such decision making can be challenging because the environment in MEC systems can be complex and involve time-varying system dynamics. To address such challenges, deep reinforcement learning (DRL) emerges as a promising method. It enables agents (e.g., network entities, mobile devices) to learn the optimal decision-making policy through interacting with the environment. In this chapter, we describe how DRL can be incorporated into MEC systems for improving the system performance. We first give an overview of DRL techniques. Then, we present a case study on the task offloading problem in MEC systems. In particular, we focus on the unknown and time-varying load level dynamics at the edge nodes and formulate a task offloading problem for minimizing the task delay and the ratio of dropped tasks. We propose a deep Q-learning-based algorithm that enables the mobile devices to make their task offloading decisions in a decentralized fashion with local information. This algorithm incorporates double deep Q-network (DQN) and dueling DQN techniques for enhancing the algorithm performance. Simulation results demonstrate that the proposed algorithm can reduce the task delay and ratio of dropped tasks significantly when compared with the existing methods. Finally, we outline several challenges and future research directions.

Keywords: Deep reinforcement learning, Deep Q-learning, Mobile edge computing, Task offloading, Resource allocation
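The two DQN enhancements named in the abstract have compact standard forms, sketched below for illustration. This is not the chapter's implementation; the functions, variable names, and toy values are assumptions. The first function shows the dueling aggregation Q(s,a) = V(s) + A(s,a) − mean A(s,·), and the second shows the double-DQN target, in which the online network selects the next action and the target network evaluates it.

```python
def dueling_q_values(value, advantages):
    """Dueling DQN aggregation: Q(s,a) = V(s) + A(s,a) - mean_a' A(s,a').

    `value` is the scalar state-value estimate V(s); `advantages` is a list of
    per-action advantage estimates A(s,a). Subtracting the mean advantage makes
    the V/A decomposition identifiable.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + a - mean_adv for a in advantages]

def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double DQN target for one transition.

    The online network's Q-values (`q_online_next`) pick the greedy next
    action; the target network's Q-values (`q_target_next`) evaluate it,
    which reduces the overestimation bias of vanilla DQN.
    """
    a_star = max(range(len(q_online_next)), key=lambda a: q_online_next[a])
    return reward + (0.0 if done else gamma * q_target_next[a_star])
```

In a task offloading setting, the actions would correspond to candidate edge nodes (or local execution), and the reward would reflect negative task delay with a penalty for dropped tasks, as in the problem the chapter formulates.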
