Abstract

Tracking space noncooperative targets, including disabled and mobile spacecraft, remains a challenging problem. This article develops two reinforcement-learning-based parameter-self-tuning controllers for two different tracking cases: case A, tracking a disabled target, and case B, tracking a mobile target. An adaptive controller that accounts for five model uncertainties is adopted for case A, and a modified PD controller is derived for case B. The actor–critic framework is employed to reduce the initial control accelerations in case A and to improve the terminal tracking accuracy in case B. Relations between control parameters and tracking errors are captured through a fuzzy inference system. Finally, reinforcement learning is used to select suitable control parameters for achieving the desired objectives. Numerical experimental results validate the effectiveness of the proposed algorithms in reducing the initial control accelerations for case A and improving the terminal tracking accuracy for case B.
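To make the parameter-self-tuning idea concrete, the sketch below shows a heavily simplified, hypothetical version of the concept: PD gains of a one-dimensional double-integrator tracking loop are tuned by a REINFORCE-with-baseline update, a basic variant of the actor–critic scheme. The plant, cost function, Gaussian policy over log-gains, and all learning rates are illustrative assumptions and are not taken from the article; in particular, the adaptive controller with model uncertainties and the fuzzy inference system are not reproduced here.

```python
# Minimal sketch (not the article's algorithm): actor-critic-style self-tuning of
# PD gains for a 1-D double-integrator tracking task. Dynamics, cost, and
# hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(kp, kd, dt=0.05, steps=200):
    """Drive a double integrator from x=1 toward the origin with a PD law;
    return accumulated tracking-error plus control-effort cost."""
    x, v, cost = 1.0, 0.0, 0.0
    for _ in range(steps):
        u = -kp * x - kd * v                      # PD control law
        v += u * dt                               # semi-implicit Euler step
        x += v * dt
        cost += (x * x + 0.01 * u * u) * dt       # quadratic running cost
    return cost

# Actor: Gaussian policy over log-gains (mean = theta, fixed exploration std).
theta = np.array([0.0, 0.0])                      # mean of (log kp, log kd)
sigma = 0.2                                       # exploration noise
baseline = simulate(1.0, 1.0)                     # critic: running cost estimate
alpha_actor, alpha_critic = 0.05, 0.1

for episode in range(300):
    eps = rng.normal(0.0, sigma, size=2)          # sample gain perturbation
    kp, kd = np.exp(theta + eps)                  # candidate PD gains
    cost = simulate(kp, kd)
    advantage = -(cost - baseline)                # lower cost => positive advantage
    theta += alpha_actor * advantage * eps / sigma**2   # likelihood-ratio update
    baseline += alpha_critic * (cost - baseline)        # critic (baseline) update

kp, kd = np.exp(theta)
print(f"tuned gains: kp={kp:.2f}, kd={kd:.2f}, cost={simulate(kp, kd):.3f}")
```

In this toy setting the "state" seen by the learner is only the episode cost; the article's controllers instead feed tracking errors through a fuzzy inference system to relate them to control parameters before the reinforcement-learning selection step.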
