Abstract

Intelligent reflective surface (IRS) provides an effective solution for reconfiguring air-to-ground wireless channels, and intelligent agents based on reinforcement learning can dynamically adjust the reflection coefficients of the IRS to adapt to changing channels. However, most existing reinforcement-learning-based IRS configuration schemes require long training times and are difficult to deploy industrially. This paper proposes a model-free IRS control scheme based on reinforcement learning and adopts transfer learning to accelerate the training process. A knowledge base of source tasks is constructed for transfer learning, allowing experience to be accumulated from different source tasks. To mitigate potential negative effects of transfer learning, task similarity is quantified through the unmanned aerial vehicle (UAV) flight path. After identifying the source task most similar to the target task, the parameters of the source task model are used as the initial values of the target task model to accelerate the convergence of reinforcement learning. Simulation results demonstrate that the proposed method can increase the convergence speed of the traditional double deep Q-network (DDQN) algorithm by up to 60%.
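The abstract outlines a three-step procedure: measure task similarity from UAV flight paths, pick the most similar source task from the knowledge base, and warm-start the target DDQN with that source model's parameters. The following is a minimal sketch of that transfer step, assuming hypothetical names (QNetwork, path_similarity, select_source_task) and an illustrative path-distance metric; the paper's exact network architecture and similarity measure may differ.

```python
# Minimal transfer-initialization sketch (hypothetical names and metric).
import numpy as np
import torch.nn as nn


class QNetwork(nn.Module):
    """Toy Q-network mapping channel-state features to IRS phase-shift actions."""

    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, num_actions),
        )

    def forward(self, state):
        return self.net(state)


def path_similarity(path_a: np.ndarray, path_b: np.ndarray) -> float:
    """Similarity of two UAV flight paths (T x 3 waypoint arrays) as the
    negative mean Euclidean distance between corresponding waypoints.
    Illustrative metric only; the paper's quantitative measure may differ."""
    t = min(len(path_a), len(path_b))
    return -float(np.linalg.norm(path_a[:t] - path_b[:t], axis=1).mean())


def select_source_task(knowledge_base: dict, target_path: np.ndarray) -> str:
    """Pick the source task whose stored flight path is most similar to the target's."""
    return max(
        knowledge_base,
        key=lambda k: path_similarity(knowledge_base[k]["path"], target_path),
    )


if __name__ == "__main__":
    state_dim, num_actions = 16, 8
    # Knowledge base of source tasks: each entry stores a flight path and trained weights.
    knowledge_base = {
        "task_A": {"path": np.random.rand(50, 3),
                   "weights": QNetwork(state_dim, num_actions).state_dict()},
        "task_B": {"path": np.random.rand(50, 3),
                   "weights": QNetwork(state_dim, num_actions).state_dict()},
    }
    target_path = np.random.rand(50, 3)

    best = select_source_task(knowledge_base, target_path)
    target_q = QNetwork(state_dim, num_actions)
    # Use the source-task parameters as the initial values of the target model.
    target_q.load_state_dict(knowledge_base[best]["weights"])
    print(f"Initialized target DDQN from source task: {best}")
    # DDQN training on the target task would then proceed from this warm start.
```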
