Abstract

Information-centric wireless networks (ICWNs) have recently emerged as a promising next-generation Internet architecture in which network nodes have computing and caching capabilities, allowing them to adapt to the growing mobile data traffic in 5G high-speed communication networks. However, the design of ICWNs still faces various challenges with respect to capacity and traffic. Mobile edge computing (MEC) and device-to-device (D2D) communication can therefore be employed to help offload the core network. This paper investigates the optimal resource-allocation policy in ICWNs by maximizing the spectrum efficiency and system capacity of the overall network. Because the wireless channel environment is unknown and stochastic, the problem is modeled as a Markov decision process. With continuous-valued state and action variables, a policy gradient approach is employed to learn the optimal policy through interaction with the environment. We first determine the communication mode, D2D or cellular, according to the location of the cached content. We then adopt a Gaussian distribution as the parameterization strategy to generate continuous stochastic actions for power selection, and use a softmax output for channel selection, maximizing system capacity and spectrum efficiency while avoiding interference with cellular users. Numerical experiments show that our learning method performs well in a D2D-enabled MEC system.
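The abstract describes a hybrid policy parameterization: a Gaussian head that samples a continuous transmit power and a softmax head that selects a discrete channel, both trained by policy gradient. Below is a minimal, self-contained REINFORCE-style sketch of that idea in Python. The linear policy, state dimension, power budget, and toy reward are our illustrative assumptions, not the paper's actual network architecture, state features, or capacity/spectrum-efficiency objective.

```python
# Hedged sketch of a Gaussian (power) + softmax (channel) policy trained
# with REINFORCE. All dimensions and the reward are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 4    # e.g., channel gains + cached-content indicator (assumed)
N_CHANNELS = 3   # number of selectable channels (assumed)
P_MAX = 1.0      # transmit power budget in watts (assumed)

# Linear parameterization: one weight vector/matrix per policy head.
W_mu = rng.normal(scale=0.1, size=(STATE_DIM,))             # Gaussian mean head
log_std = np.float64(-0.5)                                  # learned log std dev
W_ch = rng.normal(scale=0.1, size=(STATE_DIM, N_CHANNELS))  # channel-logit head

def act(s):
    """Sample a hybrid action: continuous power (Gaussian) + discrete channel (softmax)."""
    mu = s @ W_mu
    std = np.exp(log_std)
    power = rng.normal(mu, std)          # continuous stochastic power action
    logits = s @ W_ch
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                 # softmax over channels
    ch = rng.choice(N_CHANNELS, p=probs)
    return power, ch, mu, std, probs

def grad_logpi(s, power, ch, mu, std, probs):
    """Score-function gradients of log pi(a|s) for both heads."""
    g_mu = (power - mu) / std**2 * s              # d log N(power; mu, std) / d W_mu
    g_logstd = (power - mu)**2 / std**2 - 1.0     # d log N / d log_std
    onehot = np.eye(N_CHANNELS)[ch]
    g_ch = np.outer(s, onehot - probs)            # d log softmax_ch / d W_ch
    return g_mu, g_logstd, g_ch

alpha = 0.01
for episode in range(200):
    s = rng.normal(size=STATE_DIM)
    power, ch, mu, std, probs = act(s)
    power_clipped = np.clip(power, 0.0, P_MAX)    # respect the power budget
    # Toy reward: stands in for the paper's capacity/spectrum-efficiency objective.
    reward = np.log1p(power_clipped) - 0.1 * ch
    g_mu, g_logstd, g_ch = grad_logpi(s, power, ch, mu, std, probs)
    W_mu += alpha * reward * g_mu
    log_std += alpha * reward * g_logstd
    W_ch += alpha * reward * g_ch
```

The single update rule is plain REINFORCE (gradient ascent on reward-weighted log-probabilities); the paper's method may use a baseline, batching, or a deep network, which this sketch omits for brevity.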

Highlights

  • Alongside advances in information and communications technology, smart mobile devices are proliferating at an unprecedented rate [1]

  • To address the challenges of jointly handling resource allocation and power adaptation in information-centric wireless networks (ICWNs), a number of novel techniques have been proposed

  • In contrast to existing works, this paper focuses on communication resource allocation with deep reinforcement learning (DRL) in D2D-enabled mobile edge computing (MEC), enabling mobile users to learn allocation policies automatically from only their cached content and channel information

Summary

INTRODUCTION

In addition to advances in information and communications technology, smart mobile devices are proliferating at an unprecedented rate [1]. The technical issues and challenges raised by ICWNs require in-depth research, such as the high and variable latency of transmitting large volumes of data to the cloud for processing. This approach places a heavy burden on the network, and network congestion as well as demanding requirements on computing, caching, and communication (3C) must be considered. To address communication resource allocation for D2D-enabled MEC in an ICWN, we employ a novel deep reinforcement learning (DRL) approach that automatically optimizes resource allocation and power control decisions.
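Before the DRL agent allocates channels and power, the abstract notes a preliminary step: recognizing the communication mode (D2D or cellular) from the location of the cached content. The following minimal sketch illustrates one plausible realization; the cache lookup, the distance threshold, and the select_mode helper are hypothetical, since the paper's exact criterion is not reproduced here.

```python
# Hedged sketch of mode recognition: choose D2D if a nearby peer caches the
# requested content, otherwise fall back to cellular. Criterion is assumed.
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Set, Tuple

class Mode(Enum):
    D2D = "d2d"            # content cached at a nearby device
    CELLULAR = "cellular"  # content fetched via the base station / MEC server

@dataclass
class Device:
    device_id: int
    position: Tuple[float, float]   # planar coordinates in meters
    cached_content: Set[str]

D2D_RANGE_M = 50.0  # assumed maximum D2D link distance (meters); illustrative

def select_mode(requester: Device, peers: List[Device],
                content: str) -> Tuple[Mode, Optional[Device]]:
    """Return (D2D, peer) if a peer within range caches the content, else (CELLULAR, None)."""
    for peer in peers:
        dx = requester.position[0] - peer.position[0]
        dy = requester.position[1] - peer.position[1]
        in_range = (dx * dx + dy * dy) ** 0.5 <= D2D_RANGE_M
        if content in peer.cached_content and in_range:
            return Mode.D2D, peer
    return Mode.CELLULAR, None

# Usage example with hypothetical devices and content names:
alice = Device(1, (0.0, 0.0), set())
bob = Device(2, (10.0, 0.0), {"video_42"})
mode, peer = select_mode(alice, [bob], "video_42")  # -> (Mode.D2D, bob)
```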

RELATED WORK
PROBLEM FORMULATION
RESOURCE ALLOCATION ALGORITHM
DEEP REINFORCEMENT LEARNING
RESOURCE ALLOCATION AND POWER CONTROL METHOD
TRAINING ALGORITHM
EXPERIMENT AND EVALUATION
CONCLUSION
