Abstract
To support dramatically increased traffic loads, communication networks are becoming ultra-dense. Traditional cell association (CA) schemes are time-consuming, motivating the search for fast alternatives. This paper proposes a deep Q-learning based scheme whose main idea is to train a deep neural network (DNN) to calculate the Q-values of all state-action pairs; each user is then associated with the cell holding the maximum Q-value. In the training stage, the intelligent agent continuously generates samples through trial and error to train the DNN until convergence. In the application stage, the state vectors of all users are fed into the trained DNN to quickly obtain a satisfactory CA result for a scenario with the same base station (BS) locations and user distribution. Simulations demonstrate that the proposed scheme produces satisfactory CA results in a computational time several orders of magnitude shorter than that of traditional schemes, while performance metrics such as capacity and fairness are guaranteed.
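The application stage described above can be sketched as follows. This is a minimal illustration, not the paper's model: the network sizes, the state encoding, and the randomly initialised two-layer network standing in for the trained DNN are all assumptions. The point is only the mechanics of mapping each user's state vector to per-cell Q-values and associating with the argmax cell.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 4     # number of candidate base stations (assumed)
STATE_DIM = 8   # length of a user's state vector (assumed)
HIDDEN = 16     # hidden-layer width (assumed)

# Randomly initialised two-layer network standing in for the trained DNN.
W1 = rng.normal(size=(STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, N_CELLS))
b2 = np.zeros(N_CELLS)

def q_values(state):
    """Forward pass: a user's state vector -> one Q-value per candidate cell."""
    h = np.maximum(0.0, state @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

def associate(states):
    """Greedy cell association: each user picks the cell with maximum Q-value."""
    return np.array([int(np.argmax(q_values(s))) for s in states])

# Application stage: feed all users' state vectors through the DNN at once.
user_states = rng.normal(size=(5, STATE_DIM))
chosen_cells = associate(user_states)  # one chosen cell index per user
print(chosen_cells)
```

Because association reduces to a single batched forward pass plus an argmax, its cost is essentially independent of the iterative search that traditional CA schemes perform, which is the source of the claimed speed-up.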