Abstract

Unmanned Aerial Vehicles (UAVs), i.e. drones, have become a key enabling technology for reconnaissance applications in fields such as military, maritime, and transportation. UAVs offer several benefits, such as affordability and flexibility in deployment. However, their limited flight time due to energy consumption is one of their key limitations. It is therefore crucial to ensure that UAVs can complete their mission while consuming the least energy possible. In this paper, we propose a novel framework for smart UAV navigation that minimizes the time and energy required to visit mobile targets. We develop a Deep Reinforcement Learning (DRL) approach that allows the drone to learn the targets' mobility pattern and build its least-energy scanning strategy accordingly. We conduct an initial evaluation of the system and of our proposed DRL model policy using simulation. Then, to overcome the time-consuming exploration phase of DRL, we develop a Digital Twin (DT) environment based on a 3D physics-based simulator, which can be used to train the DRL agent efficiently. We also develop a testbed based on hardware integration with the Parrot ANAFI drone to verify the feasibility of the proposed methodology. Our findings confirm that the DRL-based agent can achieve performance close to that of a benchmark policy. Moreover, the testbed experiment validates the practicality of utilizing the DT environment for DRL exploration.
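The core idea of the abstract (an agent learning a target's mobility pattern to minimize the energy spent intercepting it) can be illustrated with a hypothetical toy sketch, not the paper's actual model: tabular Q-learning on a small grid where a drone chases a target patrolling a fixed cyclic route, and every move costs one unit of "energy" (reward -1). The grid size, target route, start position, and hyperparameters below are all illustrative assumptions.

```python
import random

# Toy sketch (assumed setup, not the paper's model): a drone on a GRID x GRID
# grid learns to intercept a target that patrols the top row cyclically.
GRID = 5
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def target_pos(t):
    """Target patrols the top row left to right, cyclically."""
    return (0, t % GRID)

def step_drone(pos, action):
    """Move one cell, clamped to the grid (a wall bump wastes the move)."""
    dr, dc = action
    return (min(max(pos[0] + dr, 0), GRID - 1),
            min(max(pos[1] + dc, 0), GRID - 1))

def train(episodes=3000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-learning over states (row, col, target phase)."""
    rng = random.Random(seed)
    Q = {}  # (row, col, t % GRID) -> list of action values
    for _ in range(episodes):
        pos, t = (GRID - 1, 0), 0   # drone starts in the bottom-left corner
        for _ in range(50):          # per-episode step cap
            state = (*pos, t % GRID)
            q = Q.setdefault(state, [0.0] * len(ACTIONS))
            if rng.random() < eps:   # epsilon-greedy exploration
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=q.__getitem__)
            nxt = step_drone(pos, ACTIONS[a])
            t += 1
            done = nxt == target_pos(t)
            reward = 0.0 if done else -1.0  # each move costs one unit of energy
            nq = Q.setdefault((*nxt, t % GRID), [0.0] * len(ACTIONS))
            q[a] += alpha * (reward + (0.0 if done else gamma * max(nq)) - q[a])
            pos = nxt
            if done:
                break
    return Q

def greedy_steps(Q, max_steps=50):
    """Follow the greedy policy; return moves taken (an energy proxy) to intercept."""
    pos, t = (GRID - 1, 0), 0
    for step in range(max_steps):
        q = Q.get((*pos, t % GRID), [0.0] * len(ACTIONS))
        pos = step_drone(pos, ACTIONS[max(range(len(ACTIONS)), key=q.__getitem__)])
        t += 1
        if pos == target_pos(t):
            return step + 1
    return max_steps
```

Because the return is the negative step count, maximizing it is equivalent to minimizing the energy spent before interception; including the target's phase in the state is what lets the agent exploit the mobility pattern rather than merely chase the target's current position. The paper's actual approach uses deep (function-approximated) RL trained in a 3D physics-based Digital Twin, which this tabular toy does not capture.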


