Abstract
In this paper, we consider a reinforcement learning (RL) based multi-user downlink communication system. An actor-critic based deep channel prediction (CP) algorithm is proposed at the base station (BS), where the actor network directly outputs the predicted channel state information (CSI) without relying on channel reciprocity. Unlike existing methods, which either require perfect CSI or estimate outdated CSI under strict constraints on the pilot sequences, the proposed algorithm needs no such prior knowledge or constraints. Deep Q-learning and policy gradient methods are adopted to update the parameters of the proposed prediction network, with the objective of maximizing the overall transmission sum rate. Numerical simulation results and a complexity analysis verify that the proposed CP algorithm outperforms existing traditional and learning-based methods in terms of sum rate across different channel models and different numbers of users and antennas.
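The actor-critic CP loop described above can be sketched as follows. This is a minimal toy under illustrative assumptions, not the paper's implementation: the linear actor and critic (standing in for the deep networks), the real-valued channel, the single-user matched-filter rate proxy, the antenna count, and the learning rates are all assumptions made here for a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ANT = 4                      # hypothetical number of BS antennas
LR_ACTOR, LR_CRITIC = 1e-2, 1e-2

# Actor: linear map from the noisy pilot observation to the predicted CSI
# (the paper's actor is a deep network; a linear map is a toy stand-in).
W_actor = 0.1 * rng.normal(size=(N_ANT, N_ANT))
# Critic: linear value estimate Q(s, a) ~ w_s.s + w_a.a (toy approximation).
w_s = np.zeros(N_ANT)
w_a = np.zeros(N_ANT)

def sum_rate(h_true, h_hat):
    """Rate proxy: matched-filter beamforming built from the predicted CSI."""
    bf = h_hat / (np.linalg.norm(h_hat) + 1e-9)
    return np.log2(1.0 + (h_true @ bf) ** 2)

for _ in range(1000):
    h_true = rng.normal(size=N_ANT)               # unknown downlink channel
    obs = h_true + 0.1 * rng.normal(size=N_ANT)   # noisy pilot observation (state)
    act = W_actor @ obs                           # actor outputs predicted CSI (action)
    reward = sum_rate(h_true, act)                # reward: achieved rate

    # Critic update: one-step TD toward the observed reward
    # (bandit-style here, no bootstrapping, for simplicity).
    td = reward - (w_s @ obs + w_a @ act)
    w_s += LR_CRITIC * td * obs
    w_a += LR_CRITIC * td * act

    # Policy-gradient step: push the actor along the critic's dQ/da = w_a.
    W_actor += LR_ACTOR * np.outer(w_a, obs)
```

In the paper's setting, the linear pieces above would be replaced by deep networks, the reward by the multi-user sum rate, and the bandit-style critic update by the deep-Q target; the sketch only shows how the predicted-CSI action couples the actor, the critic, and the rate objective.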