Abstract
In device‐to‐device (D2D) communications, D2D users establish a direct link by reusing the cellular users' spectrum to increase the network spectral efficiency. However, because cellular users have higher priority, the interference imposed by D2D users on cellular users must be controlled by channel and power allocation algorithms. Because the distribution of the dynamic channel parameters is unknown, learning‐based resource allocation algorithms work more efficiently than classic optimization methods. In this paper, the problem of joint channel and power allocation for D2D users in realistic scenarios is formulated as an interactive learning problem, in which the channel state information of the selected channels is unknown to the decision center and is learned during the allocation process. To maximize the reward obtained by choosing an action (a channel and a power level) for each D2D pair, a recency‐based Q‐learning method is introduced to find the best channel‐power pair for each D2D pair. The proposed method is shown to achieve a logarithmically growing regret function asymptotically, which makes it an order‐optimal policy, and it converges to a stable equilibrium solution. Simulation results confirm that the proposed method outperforms conventional learning methods and random allocation in terms of network sum rate and fairness.
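The abstract only names the algorithm; the exact recency mechanism and reward definition are not specified here. As a rough illustration, the following minimal Python sketch assumes a stateless Q-learner per D2D pair whose exploration bonus grows the longer a (channel, power) action has gone untried, which is one plausible reading of "recency-based". All names and parameters (N_CHANNELS, POWER_LEVELS, ALPHA, BETA, simulated_reward) are hypothetical stand-ins, not the paper's method.

```python
import random
from collections import defaultdict

# Hypothetical setup -- the abstract does not specify these values.
N_CHANNELS = 4                # cellular channels available for reuse
POWER_LEVELS = [0, 1, 2]      # discrete transmit-power indices
ALPHA = 0.1                   # Q-learning step size
BETA = 0.5                    # weight of the recency (staleness) bonus

ACTIONS = [(c, p) for c in range(N_CHANNELS) for p in POWER_LEVELS]

class RecencyQLearner:
    """One learner per D2D pair; an action is a (channel, power) tuple."""

    def __init__(self):
        self.q = defaultdict(float)          # estimated reward per action
        self.last_chosen = defaultdict(int)  # step each action was last played
        self.t = 0

    def select_action(self):
        # Greedy in Q-value plus a bonus that grows the longer an action
        # has gone unplayed, so stale estimates keep being re-explored.
        self.t += 1
        def score(a):
            return self.q[a] + BETA * (self.t - self.last_chosen[a]) / self.t
        action = max(ACTIONS, key=score)
        self.last_chosen[action] = self.t
        return action

    def update(self, action, reward):
        # Stateless Q-learning update toward the observed reward, e.g. the
        # achieved rate when the interference constraint is satisfied.
        self.q[action] += ALPHA * (reward - self.q[action])

def simulated_reward(channel, power):
    """Stand-in for the unknown channel: a noisy rate observation."""
    mean_rate = (channel + 1) * (power + 1) / (N_CHANNELS * len(POWER_LEVELS))
    return max(0.0, random.gauss(mean_rate, 0.1))

if __name__ == "__main__":
    learner = RecencyQLearner()
    for _ in range(5000):
        ch, pw = learner.select_action()
        learner.update((ch, pw), simulated_reward(ch, pw))
    best = max(ACTIONS, key=lambda a: learner.q[a])
    print("learned best (channel, power):", best)
```

In a multi-pair deployment, each D2D pair would run its own learner and the reward would come from the measured rate on the chosen channel, with interference to the cellular user reflected in the reward (for example, zero reward when the interference constraint is violated).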