Abstract

Research into intelligent motion planning has been driven by the growing autonomy of autonomous underwater vehicles (AUVs) operating in complex, unknown environments. Deep reinforcement learning (DRL) algorithms with actor-critic structures provide adaptive solutions that can be learned online for completely unknown systems. The present study proposes an adaptive motion planning and obstacle avoidance technique for an AUV based on deep reinforcement learning. The method employs the twin-delayed deep deterministic policy gradient (TD3) algorithm, which is suited to Markov decision processes with continuous action spaces. The environment is observed through the vehicle's onboard navigation sensors, and motion planning is carried out without any prior knowledge of the environment. A comprehensive reward function is designed for the control objective, and the resulting system is robust to disturbances caused by ocean currents. Simulation results show that the motion planning system precisely guides an AUV with six-degree-of-freedom dynamics toward its target, and that the trained agent generalizes well to new scenarios.
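The core of TD3, as named in the abstract, is clipped double-Q learning with target-policy smoothing: the bootstrap target uses the smaller of two target-critic estimates, evaluated at a noise-perturbed target action. The sketch below illustrates only that target computation; the linear critics, the toy state vector, and all shapes are illustrative assumptions, not details from the paper's AUV setup.

```python
import numpy as np

def td3_target(reward, next_state, target_actor, target_q1, target_q2,
               gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0,
               rng=None):
    """Clipped double-Q learning target with target-policy smoothing (TD3)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Target policy smoothing: perturb the target action with clipped
    # Gaussian noise, then clip back into the valid action range.
    action = target_actor(next_state)
    noise = np.clip(rng.normal(0.0, noise_std, size=action.shape),
                    -noise_clip, noise_clip)
    action = np.clip(action + noise, -act_limit, act_limit)
    # Clipped double-Q: bootstrap from the smaller of the two target
    # critics to curb overestimation bias.
    q_min = min(target_q1(next_state, action), target_q2(next_state, action))
    return reward + gamma * q_min

# Toy usage with hypothetical linear critics and a 2-D state.
state = np.array([0.5, -0.2])
actor = lambda s: np.array([np.tanh(s @ np.array([1.0, 0.5]))])
q1 = lambda s, a: float(s @ np.array([0.3, 0.1]) + 0.5 * a[0])
q2 = lambda s, a: float(s @ np.array([0.2, 0.4]) + 0.4 * a[0])
y = td3_target(1.0, state, actor, q1, q2)
```

In a full agent this target is regressed against by both critics, while the actor is updated less frequently (the "delayed" part of TD3) using the gradient of one critic only.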
