Abstract
Unmanned aerial vehicles (UAVs) are widely used in power inspection. However, limited battery life, turbulent wind, and the UAV's own motion pose significant challenges. To address these problems, a reinforcement learning-based energy-saving path-planning algorithm (ESPP-RL) for turbulent wind environments is proposed. The algorithm dynamically adjusts the UAV's flight strategy through reinforcement learning to find the most energy-efficient flight path, allowing the UAV to navigate under real-world constraints while saving energy. First, an observation processing module is designed that combines battery energy-consumption prediction with multi-target path planning. Then, the multi-target path-planning problem is decomposed into iteratively and dynamically optimized single-target subproblems, yielding an optimal discrete path with respect to the predicted energy consumption. In addition, an adaptive path-planning reward function based on reinforcement learning is designed. Finally, a simulation scenario for a quadcopter UAV is set up in a 3-D turbulent wind environment. Several simulations show that the proposed algorithm effectively resists turbulent-wind disturbance and improves convergence.
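To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of an adaptive reward of the kind the abstract describes: it rewards progress toward the current single-target subgoal while penalizing predicted energy consumption under wind. The function names, weights, and the simple energy model are illustrative assumptions, not details taken from the paper.

```python
# Minimal illustrative sketch of an energy-aware, adaptive path-planning
# reward. All names, weights, and the energy model are assumptions.
import numpy as np

def predicted_energy(velocity, wind, hover_power=150.0, drag_coeff=0.3, dt=0.1):
    """Crude per-step energy estimate (J): hover cost plus a drag-like
    term that grows with airspeed relative to the wind."""
    airspeed = np.linalg.norm(velocity - wind)
    return (hover_power + drag_coeff * airspeed**3) * dt

def adaptive_reward(pos, prev_pos, target, velocity, wind,
                    w_progress=1.0, w_energy=0.01,
                    reach_bonus=10.0, reach_radius=1.0):
    """Reward = progress toward the current single-target subgoal
    minus a weighted energy penalty, with a bonus on arrival."""
    progress = np.linalg.norm(prev_pos - target) - np.linalg.norm(pos - target)
    energy = predicted_energy(velocity, wind)
    reward = w_progress * progress - w_energy * energy
    if np.linalg.norm(pos - target) < reach_radius:
        reward += reach_bonus
    return reward

# Example step: the UAV moves roughly toward the target against a gust.
prev_pos = np.array([0.0, 0.0, 10.0])
pos      = np.array([0.8, 0.1, 10.0])
target   = np.array([20.0, 0.0, 12.0])
velocity = (pos - prev_pos) / 0.1
wind     = np.array([-2.0, 1.0, 0.0])   # sampled turbulent-wind vector
print(adaptive_reward(pos, prev_pos, target, velocity, wind))
```

A reward of this shape lets the policy trade off detours against headwind energy cost; decomposing the multi-target problem into single-target subgoals simply means swapping `target` as each subgoal is reached.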