Abstract
Unmanned aerial vehicle (UAV)-aided continuous emergency communications have recently emerged as a key solution for providing data transmission in disaster areas, thanks to UAVs' flexible deployment and high mobility. In practice, limited onboard energy and state deterioration mean that UAVs require energy replenishment and maintenance. However, existing research focuses mainly on UAV deployment and rarely studies policies for their operations and maintenance. To ensure the continuous and reliable execution of communication tasks, a dynamic operations and maintenance policy is proposed to assign tasks and determine maintenance activities for UAVs. First, a dynamic operations and maintenance policy composed of a task assignment policy and a maintenance policy is constructed. Next, the joint operations and maintenance optimization problem is formulated as a Markov decision process (MDP) to optimize the performance of the UAV swarm in terms of coverage, fairness, and operations and maintenance cost. Then, a deep reinforcement learning approach is tailored to solve the proposed MDP, in which repeated states are eliminated by state preprocessing and an action mask method is used to satisfy operational constraints. Finally, the proposed approach is validated through its application to the operations and maintenance of a UAV swarm for continuous emergency communication.
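The action mask mentioned in the abstract is a standard technique for enforcing operational constraints in deep reinforcement learning: infeasible actions are assigned a logit of negative infinity before the softmax, so the policy assigns them zero probability. The following is a minimal sketch under that general interpretation; the function name, shapes, and the NumPy implementation are illustrative and not taken from the paper.

```python
import numpy as np

def masked_action_probs(logits, valid_mask):
    """Apply an action mask: infeasible actions get -inf logits,
    so the softmax assigns them exactly zero probability."""
    masked = np.where(valid_mask, logits, -np.inf)
    # Numerically stable softmax over the masked logits
    z = masked - masked.max()
    exp = np.exp(z)
    return exp / exp.sum()

# Example: 4 candidate actions for one UAV; action 1 (e.g. a task
# assignment that violates an operational constraint) is masked out.
logits = np.array([2.0, 1.0, 0.5, -1.0])
mask = np.array([True, False, True, True])
probs = masked_action_probs(logits, mask)
# probs[1] is exactly 0, and the remaining probabilities sum to 1.
```

Masking at the logit level, rather than penalizing constraint violations in the reward, guarantees that the sampled action is always feasible, which matters when infeasible actions (e.g. dispatching a UAV that is under maintenance) must never be executed.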