Abstract

Due to the limited ability of a single unmanned aerial vehicle (UAV), group unmanned aerial vehicles (UAVs) have attracted increasing attention in the communication and radar fields. An integrated sensing and communication (ISAC) system allows the communication and radar modules to share the radar module's resources; coupled with efficient resource allocation methods, it can effectively alleviate the problems of inadequate UAV resources and low resource utilization. In this paper, the resource allocation problem for group UAVs equipped with an ISAC system is addressed to achieve a trade-off between detection and communication performance. The resource allocation problem is formulated as an optimization problem, but for group UAVs the problem is complex and cannot be solved efficiently. In contrast to traditional resource allocation schemes, which require extensive computation or large sample sets, a novel reinforcement-learning-based method is proposed. We formulate a new reward function by combining mutual information (MI) and the communication rate (CR), where the MI characterizes the radar detection performance and the CR characterizes the wireless communication performance. Simulation results show that, compared with the traditional Kuhn-Munkres (KM) and deep neural network (DNN) methods, the proposed method performs better as the problem complexity increases. In addition, its execution time is close to that of the DNN scheme and shorter than that of the KM algorithm.
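The abstract describes a reward that trades off radar MI against the communication rate. The following is a minimal sketch of such a combined reward, assuming a Gaussian-channel MI model, Shannon-rate CR, and an illustrative weight alpha and normalization constants; these values and function names are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def radar_mutual_information(bandwidth_hz, dwell_time_s, snr_radar):
    # MI of a Gaussian target-echo channel: T * B * log2(1 + SNR), in bits (assumed model)
    return dwell_time_s * bandwidth_hz * np.log2(1.0 + snr_radar)

def communication_rate(bandwidth_hz, snr_comm):
    # Shannon rate of the communication link: B * log2(1 + SNR), in bit/s
    return bandwidth_hz * np.log2(1.0 + snr_comm)

def combined_reward(mi, cr, mi_max, cr_max, alpha=0.5):
    # Weighted trade-off: alpha on normalized MI, (1 - alpha) on normalized CR
    return alpha * (mi / mi_max) + (1.0 - alpha) * (cr / cr_max)

# Example: one UAV devoting 10 MHz to simultaneous sensing and communication
mi = radar_mutual_information(bandwidth_hz=10e6, dwell_time_s=1e-3, snr_radar=5.0)
cr = communication_rate(bandwidth_hz=10e6, snr_comm=20.0)
print(combined_reward(mi, cr, mi_max=5e4, cr_max=1e8, alpha=0.6))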

Highlights

  • In recent years, due to the limited ability of a single unmanned aerial vehicle (UAV), group unmanned aerial vehicles (UAVs) have been proposed for complex applications

  • Because the integrated sensing and communication (ISAC) system must be mobilized flexibly, traditional fixed resource allocation can no longer allocate resources effectively according to the real-time situation, resulting in low resource utilization

  • To solve this problem, this paper first summarizes and analyzes the resource allocation technology of the ISAC system and introduces the related resource allocation techniques


Summary

Introduction

Due to the limited ability of a single unmanned aerial vehicle (UAV), group UAVs have been proposed for complex applications. Policy-based reinforcement learning and actor-critic deep deterministic policy gradient algorithms have been used to allocate the energy resources of cellular networks reasonably [34], and a resource allocation method for radar detection is given in [35]. The authors of [37] studied the vehicle spectrum sharing problem based on multi-agent reinforcement learning to solve the spectrum and power allocation problem in scenarios where the channel conditions of the vehicular network change rapidly and the CSI cannot be obtained accurately. The authors of [38] proposed a communication resource allocation method based on deep reinforcement learning to guarantee the reliability and delay constraints of ultra-reliable low-latency communication services in the internet of vehicles. A novel reinforcement-learning-based method is proposed to solve the complex problem, where we formulate a new reward function by combining both the MI and the CR; a minimal sketch of such an allocation loop follows this paragraph. Notation: lower-case boldface letters denote vectors, and ξ{·} denotes the normalization operation.
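As referenced above, here is a minimal sketch of a reinforcement-learning allocation loop, assuming tabular Q-learning rather than the paper's exact algorithm. The state (index of the UAV being scheduled), the action set (discrete power levels), the SNR model, and the problem size are all illustrative assumptions; the reward is a toy stand-in for the MI/CR trade-off sketched earlier.

import numpy as np

rng = np.random.default_rng(0)
n_uavs, n_power_levels = 4, 5            # assumed problem size
q_table = np.zeros((n_uavs, n_power_levels))
alpha_lr, gamma, epsilon = 0.1, 0.9, 0.1 # learning rate, discount, exploration

def step_reward(uav, power_level):
    # Toy stand-in for the MI/CR reward of assigning `power_level` to `uav`
    snr = (power_level + 1) * (1.0 + 0.1 * uav)   # assumed SNR model
    return np.log2(1.0 + snr)

for episode in range(2000):
    for uav in range(n_uavs):                     # schedule UAVs in turn
        if rng.random() < epsilon:                # epsilon-greedy exploration
            action = int(rng.integers(n_power_levels))
        else:
            action = int(np.argmax(q_table[uav]))
        reward = step_reward(uav, action)
        next_uav = (uav + 1) % n_uavs
        td_target = reward + gamma * np.max(q_table[next_uav])
        q_table[uav, action] += alpha_lr * (td_target - q_table[uav, action])

print(np.argmax(q_table, axis=1))  # learned power level per UAV

In this sketch the Q-table converges to the highest power level for every UAV because the toy reward is monotone in power; a realistic reward would include interference and resource-budget terms so that the learned allocation reflects the MI/CR trade-off.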

Group UAVs Resource Allocation Model for the ISAC System
Reinforcement-Learning-Based UAVs Resource Allocation Method
Simulation Results
Summary and Prospect

