Abstract

Adaptive and robust control is imperative for robots operating in complex environments. Artificial intelligence approaches offer a promising solution, but training and deploying them on an actual robot remain a significant challenge. This paper proposes a reinforcement learning-based control strategy with lightweight computation, aiming to optimize thrust under complex flows and varying structural characteristics of robotic fishtails. An improved Q-learning algorithm combined with central pattern generator (CPG) control enables the robotic fishtail to make autonomous and accurate decisions in response to unknown changes. The control strategy is trained in actual physical flow fields, with an action selection strategy and a reward system proposed to accelerate convergence and enhance training stability. An integrated system for online learning, response measurement, and real-time monitoring is developed to support the training and testing processes. Variable-environment tests are performed under different turbulent flows, and distinct caudal fins are used to test variations in the tail's own structure. The experimental results confirm the robustness and adaptability of the control strategy, as well as its superiority over a PID approach in accuracy, stability, and response speed. The control strategy architecture and the physical experimental method offer practical reference for the intelligent control of real robots.
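
The abstract describes an improved Q-learning scheme that selects CPG control parameters to maximize thrust measured in a physical flow field. As a rough illustration only, the sketch below shows plain tabular Q-learning with epsilon-greedy action selection over a hypothetical discretized CPG parameter set (tail-beat frequency and amplitude); the paper's actual state definition, reward shaping, and "improved" Q-learning modifications are not given here, so every name, parameter, and value below is an assumption.

```python
# Minimal sketch, NOT the authors' implementation: tabular Q-learning that
# picks CPG parameters (assumed: tail-beat frequency and amplitude) to
# maximize a thrust reading returned by the test rig.
import random

# Hypothetical discretized action space of (frequency [Hz], amplitude [deg]) pairs.
ACTIONS = [(f, a) for f in (0.5, 1.0, 1.5, 2.0) for a in (10, 20, 30)]

ALPHA, GAMMA = 0.1, 0.9    # learning rate and discount factor (assumed values)
EPSILON = 0.2              # epsilon-greedy exploration rate (assumed value)

Q = {}                     # Q[(state, action_index)] -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the CPG parameter set."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, i), 0.0) for i in range(len(ACTIONS)))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

class DummyTank:
    """Toy stand-in for the physical flow-field rig (illustration only)."""
    def reset(self):
        return 0  # a single dummy state; the real state space is not specified
    def step(self, freq, amp):
        # Fake thrust reading: grows with frequency and amplitude, plus noise.
        thrust = freq * amp / 60.0 + random.uniform(-0.05, 0.05)
        return 0, thrust

def train(env, episodes=100, steps=50):
    """Online training loop: drive the CPG, observe thrust, update Q-values."""
    for _ in range(episodes):
        state = env.reset()
        for _ in range(steps):
            a = choose_action(state)
            freq, amp = ACTIONS[a]
            next_state, thrust = env.step(freq, amp)  # command CPG, measure thrust
            update(state, a, thrust, next_state)      # reward = measured thrust
            state = next_state

if __name__ == "__main__":
    train(DummyTank())
```

In the paper the environment is the actual water tank and fishtail rather than a simulator, which is why the abstract emphasizes convergence speed and training stability of the action selection and reward design.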
