Abstract

The urgent need for high-quality mobile services and an improved user experience has driven the development of high-capacity mobile networks. The ultradense network (UDN) is widely considered one of the most promising ways to significantly expand network capacity. However, because the communication resources and the number of links per unit area are extremely densified in a UDN, efficient, high-quality scheduling and resource allocation become challenging. Worse still, the complex UDN environment limits the direct adoption of traditional allocation schemes. In this manuscript, the author proposes an efficient resource allocation algorithm for UDN based on deep reinforcement learning. First, the author presents resource allocation strategies for the cellular network based on double Q-learning. Then, the author optimizes the algorithm by pruning redundant model weights, trading off computational complexity against performance to meet the requirements of low latency and limited computing budgets. Experiment and simulation results show that the pruning algorithm effectively removes 50% of the model parameters; the UDN allocation performance remains acceptable while the proposed algorithms save up to 50% of the complexity.
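The double Q-learning mentioned in the abstract maintains two value tables and decouples action selection from action evaluation. The sketch below is a generic tabular illustration, not the paper's implementation: the list-of-lists table layout, the caller-supplied `update_a` coin flip, and all parameter names are assumptions.

```python
def double_q_update(QA, QB, s, a, r, s2, update_a, alpha=0.1, gamma=0.9):
    """One double Q-learning step on tabular value functions QA, QB.

    The table being updated picks the greedy next action a'; the *other*
    table scores it, which reduces the overestimation bias of plain
    Q-learning. `update_a` tells which table to update (normally a fair
    coin flipped by the caller).
    """
    if not update_a:
        QA, QB = QB, QA  # swap references so the code below updates QB
    # argmax over next-state actions under the table being updated
    best = max(range(len(QA[s2])), key=lambda j: QA[s2][j])
    # TD target evaluated with the other table
    QA[s][a] += alpha * (r + gamma * QB[s2][best] - QA[s][a])
```

In a resource-allocation setting, states would encode channel/load observations and actions the candidate assignments; the paper replaces the tables with a deep network, but the select-with-one, evaluate-with-the-other structure is the same.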

Highlights

  • The upcoming Internet of Things (IoT) and future fifth-generation (5G) mobile networks will stall unless we can rapidly increase the capacity of current communication systems. Therefore, a paradigm shift and enhancements in every aspect of current mobile network systems are required to handle the immense amount of data traffic arising from high-resolution video applications [2]

  • The loss values of all models decrease as the training epochs increase. The dense model achieves the fastest convergence. The sparse model with sparsity 0.4 achieves a loss value comparable to the dense model's, which means that this sparse model can yield promising performance while reducing the weight count and computational complexity by around 60%. The convergence of the model with 60% sparsity is much slower than that of the other models

  • We develop and propose resource allocation algorithms for the ultradense network (UDN) based on deep reinforcement learning
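Magnitude-based pruning is one common way to obtain the weight sparsity levels discussed in the highlights; the sketch below is a generic illustration under that assumption, not the authors' procedure, and the function name `magnitude_prune` and its threshold rule are hypothetical.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Returns a new array; the surviving weights keep their values, so
    inference cost drops roughly in proportion to the sparsity when a
    sparse kernel is used.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest magnitude becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask
```

A model pruned this way is usually fine-tuned for a few epochs afterwards, which matches the highlight's observation that sparser models need more epochs to converge.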


Summary

System Model and Problem Formulation

We summarize the main points of this model according to [7]. A total of N small base stations (SBSs) are randomly distributed within the area of one macro cell. The probability that a total of n new SUEs arrive in the micro cell follows a Poisson distribution: P(n) = (λ_t τ)^n e^(−λ_t τ) / n!. Once we have the SINR of each SUE in the set U_n(t), the total downlink throughput of micro cell n at time slot t is the sum over its SUEs: TP_n(t) = Σ_{U ∈ U_n(t)} TP_U(t). Similar to [5], the spectral efficiency (SE) is the ratio between the total throughput and the bandwidth of the given UDN at time slot t. We propose a novel and efficient deep reinforcement learning algorithm
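The arrival probability and cell throughput above can be computed directly. This is a minimal sketch under standard assumptions the summary does not spell out (Shannon-capacity throughput per SUE); the function names are illustrative.

```python
import math

def arrival_prob(n, lam, tau):
    """Poisson probability that n new SUEs arrive in a slot of length tau
    given arrival rate lam: P(n) = (lam*tau)^n e^(-lam*tau) / n!."""
    return (lam * tau) ** n * math.exp(-lam * tau) / math.factorial(n)

def cell_throughput(sinrs, bandwidth):
    """Sum per-SUE downlink throughput over the SUE set U_n(t) and derive
    spectral efficiency (SE = throughput / bandwidth).

    Assumes Shannon-style rates: TP_U = B * log2(1 + SINR_U).
    """
    tp = sum(bandwidth * math.log2(1.0 + s) for s in sinrs)
    return tp, tp / bandwidth
```

With these two pieces, a simulator can draw the number of arriving SUEs per slot and score any candidate allocation by the resulting throughput and SE, which is the reward signal a reinforcement-learning allocator would optimize.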

Proposed Efficient Resource Allocation Algorithms for UDN
Model Updating and Training
Experiment Platform and Parameters
Simulation Results
Conclusion

