Abstract

In this paper, an adversarial deep reinforcement learning-based control method is proposed to address the robust depth-tracking problem of an underactuated autonomous underwater vehicle subject to intrinsic coupled dynamics and external disturbances. First, a long short-term memory neural network is employed to memorize and predict changes in the vehicle's state, and a cascaded multilayer perceptron projects its output onto the vehicle's action space. Subsequently, an adversarial deep reinforcement learning scheme is applied to train the control agent by introducing an adversary that counteracts the control behavior, enabling the agent to learn the control strategy under different distributions of state transitions. To evaluate the performance, a control agent is pre-trained in a simulation environment built on a reliable digital model of a real vehicle; the simulation is paced at one iteration per second to align with real-time operation, which ensures the portability of the training result and also greatly reduces the training cost. Finally, experiments with time-varying disturbances are conducted on a prototype underwater vehicle in a towing tank to further demonstrate the feasibility of the proposed learning-based control scheme. Moreover, comparative experimental results show that the learning-based control agent achieves better robustness than classic line-of-sight-based proportional–integral–derivative and adaptive line-of-sight-based proportional–integral–derivative controllers across different scenarios.
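The actor architecture summarized above — an LSTM that encodes the history of vehicle states, followed by a cascaded multilayer perceptron that projects the hidden state onto the action space — can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the state dimension, layer sizes, single bounded action output, and random weights are all assumptions for demonstration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal LSTM cell: accumulates a memory of the vehicle's state history."""
    def __init__(self, n_in, n_hidden, rng):
        # Stacked weight matrix for the input, forget, cell, and output gates.
        scale = 1.0 / np.sqrt(n_in + n_hidden)
        self.W = rng.uniform(-scale, scale, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # updated cell memory
        h = o * np.tanh(c)           # new hidden state
        return h, c

class Actor:
    """LSTM encoder followed by a cascaded MLP head mapping to the action space."""
    def __init__(self, n_state=6, n_hidden=32, n_action=1, seed=0):
        rng = np.random.default_rng(seed)
        self.n_hidden = n_hidden
        self.lstm = LSTMCell(n_state, n_hidden, rng)
        # Two-layer perceptron head (sizes are illustrative assumptions).
        self.W1 = rng.standard_normal((16, n_hidden)) * 0.1
        self.b1 = np.zeros(16)
        self.W2 = rng.standard_normal((n_action, 16)) * 0.1
        self.b2 = np.zeros(n_action)

    def act(self, state_sequence):
        h = np.zeros(self.n_hidden)
        c = np.zeros(self.n_hidden)
        for x in state_sequence:     # run the LSTM over the state history
            h, c = self.lstm.step(x, h, c)
        hidden = np.tanh(self.W1 @ h + self.b1)
        # tanh keeps the commanded control action in a bounded range.
        return np.tanh(self.W2 @ hidden + self.b2)

actor = Actor()
# Dummy 10-step history of 6-dimensional vehicle states (e.g. depth, pitch, rates).
history = np.random.default_rng(1).standard_normal((10, 6))
action = actor.act(history)
print(action.shape)  # (1,)
```

During adversarial training, a second network of the same shape would inject a counteracting disturbance into the state transition, forcing the actor to learn a policy that remains robust across perturbed dynamics.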
