Abstract

Wireless communication networks of future generations are expected to be inherently complex owing to the dense and diversified architectures they incorporate. In such networks, allocating and managing the available spectral frequencies and bandwidth among users becomes a major challenge during deployment. As a consequence of this denser and more heterogeneous infrastructure, efficient radio resource allocation and interference management are becoming ever more important, driving the search for new and more effective solutions. Moreover, for heterogeneous cellular networks with varied base station (BS) deployments and quality of service (QoS) requirements, conventional heuristic-based resource allocation algorithms are prohibitively costly and therefore undesirable. To address this problem, deep reinforcement learning (DRL) based frameworks can be used to approach global optimality in dynamic resource allocation (DRA). In this work, we consider the joint optimization problem of DRA for improving power efficiency in multi-cell wireless networks. The proposed deep-learning-based resource allocation model uses deep Q-learning (DQL) to obtain a near-optimal downlink power allocation policy for multi-cell wireless networks, with the primary objective of maximising total network throughput. Under the distributed coordinated learning approach and the deep-Q architecture, the learning algorithm converges rapidly to the optimal policy. Simulation results reveal that the proposed learning algorithm outperforms the benchmark models implemented in this work and achieves a network throughput close to that of the baseline genetic algorithm (GA) model.
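To make the described approach concrete, the following is a minimal sketch of distributed deep Q-learning for per-cell downlink power control, where each base station's agent selects a discrete transmit power level and is rewarded with the total network throughput. All names, network sizes, the toy Rayleigh channel model, and the hyperparameters are illustrative assumptions and do not reflect the paper's actual simulation setup.

```python
# Minimal sketch of distributed deep Q-learning for per-cell downlink power control.
# All constants, the toy channel model, and the reward are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

N_CELLS = 4                                         # assumed number of base stations
POWER_LEVELS = np.linspace(0.0, 1.0, 10)            # discrete transmit powers (normalised)
NOISE = 1e-3
GAMMA, EPS, LR = 0.9, 0.1, 1e-3

def sum_rate(gains, powers):
    """Toy reward: sum of per-cell spectral efficiencies (bps/Hz)."""
    rates = []
    for k in range(N_CELLS):
        signal = gains[k, k] * powers[k]
        interference = sum(gains[j, k] * powers[j] for j in range(N_CELLS) if j != k)
        rates.append(np.log2(1.0 + signal / (interference + NOISE)))
    return float(sum(rates))

# One Q-network per cell: input = flattened channel-gain matrix,
# output = one Q-value per candidate power level.
q_nets = [nn.Sequential(nn.Linear(N_CELLS * N_CELLS, 64), nn.ReLU(),
                        nn.Linear(64, len(POWER_LEVELS))) for _ in range(N_CELLS)]
optims = [torch.optim.Adam(net.parameters(), lr=LR) for net in q_nets]

for episode in range(200):
    gains = np.random.rayleigh(scale=1.0, size=(N_CELLS, N_CELLS))   # random fading state
    state = torch.tensor(gains.flatten(), dtype=torch.float32)

    # Epsilon-greedy power selection, one distributed decision per cell.
    actions = []
    for net in q_nets:
        if np.random.rand() < EPS:
            actions.append(np.random.randint(len(POWER_LEVELS)))
        else:
            actions.append(int(net(state).argmax().item()))
    powers = POWER_LEVELS[actions]

    reward = sum_rate(gains, powers)                 # shared network-throughput reward
    next_gains = np.random.rayleigh(scale=1.0, size=(N_CELLS, N_CELLS))
    next_state = torch.tensor(next_gains.flatten(), dtype=torch.float32)

    # One-step temporal-difference update of each cell's Q-network.
    for k, (net, opt) in enumerate(zip(q_nets, optims)):
        with torch.no_grad():
            target = reward + GAMMA * net(next_state).max()
        pred = net(state)[actions[k]]
        loss = nn.functional.mse_loss(pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

The coordination here is implicit: all agents observe the same channel state and share the network-wide throughput reward, which is one simple way to realise the distributed coordinated learning idea referred to in the abstract.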
