Abstract
Device-to-Device (D2D) communication over short distances is an efficient way to improve spectrum efficiency and mitigate interference. To realize optimal resource configuration, including wireless channel matching and power allocation, a distributed resource-matching scheme based on deep reinforcement learning (DRL) is proposed. The reward is defined as the difference between the achievable rate of a D2D user and its consumed power, subject to a Signal-to-Interference-plus-Noise Ratio (SINR) constraint for the cellular users on the current channel. The proposed algorithm maximizes D2D throughput and energy efficiency in a distributed manner, without online coordination or message exchange between users. The resource allocation problem is formulated as a stochastic non-cooperative game with multiple players (D2D pairs), where each player is a learning agent whose task is to learn its best strategy from locally observed information. A multi-user communication resource-matching algorithm is proposed based on a Double Deep Q-Network (DDQN), under which the total cellular throughput and user energy efficiency converge to a Nash equilibrium (NE) under mixed strategies. Simulation results show that the proposed algorithm improves the communication rate and energy efficiency of each user by selecting the optimal strategy, and exhibits better convergence than existing schemes.
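The reward structure described above (achievable rate minus consumed power, gated by the cellular users' SINR on the shared channel) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, parameter names, weighting factor, and the fixed penalty for a constraint violation are all assumptions introduced here.

```python
def d2d_reward(d2d_rate, tx_power, cellular_sinr_db,
               sinr_threshold_db=0.0, power_weight=1.0, violation_penalty=-1.0):
    """Hypothetical per-step reward for one D2D agent.

    d2d_rate          -- achievable rate of the D2D link on the chosen channel
    tx_power          -- transmit power consumed by the D2D transmitter
    cellular_sinr_db  -- SINR experienced by the cellular user sharing the channel
    sinr_threshold_db -- minimum SINR the cellular user must keep (assumed constraint)
    """
    # If reusing this channel drives the cellular user's SINR below the
    # threshold, the action violates the constraint and is penalized.
    if cellular_sinr_db < sinr_threshold_db:
        return violation_penalty
    # Otherwise the reward trades off throughput against energy consumption.
    return d2d_rate - power_weight * tx_power
```

In a DDQN setting, each D2D agent would evaluate this reward after picking a (channel, power level) action from its locally observed state, so no message exchange between agents is needed at run time.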