Abstract

This article investigates an adaptive, distributed, and cooperative control strategy for the problem of spacecraft swarm reconfiguration, which involves assembling the spacecraft in close proximity to one another while avoiding collisions among them and keeping them away from obstacles. The key idea is to transform these conflicting performance indices into equivalent ones by using soft and hard constraints. The proposed control strategy is inspired by the actor–critic framework of reinforcement learning (RL): the soft constraint is designed by using a critic neural network (NN) for assembly and obstacle avoidance, while collisions among the spacecraft are prevented by a hard constraint established through an artificial potential field (APF). Building on this equivalent transformation, the adaptive, distributed, and cooperative controller is devised by combining an actor NN of the RL algorithm, the APF, and the backstepping control technique. The actor NN estimates the desired control input signals and the undesired effects caused by disturbances and the APF, and the expected control performance is then obtained by minimizing the output of the critic NN. The computational burden of the NNs is significantly reduced by decreasing the number of parameters they must learn. Lyapunov stability theory guarantees that all signals in the closed-loop system are uniformly ultimately bounded, ensuring its stability. Simulation results for a spacecraft swarm demonstrate the effectiveness of the proposed control strategy.
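To make the soft/hard-constraint split concrete, the following is a minimal, hypothetical Python sketch of the architecture the abstract describes: a baseline backstepping-style tracking law augmented by an adaptive actor NN (the soft, learned part) and an APF repulsion term (the hard part). The dynamics model (a planar double integrator), the names `ActorNN` and `apf_repulsion`, and all gains are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch only: dynamics, network, and gains are illustrative
# assumptions, not the paper's actual formulation.
import numpy as np

def apf_repulsion(p, p_other, safe_radius=2.0, gain=5.0):
    """Hard constraint: gradient of a repulsive artificial potential field
    that activates only inside the safety radius."""
    d = p - p_other
    dist = np.linalg.norm(d)
    if dist >= safe_radius or dist < 1e-9:
        return np.zeros_like(p)
    # Gradient of the classic repulsive potential
    # 0.5 * gain * (1/dist - 1/safe_radius)^2
    return gain * (1.0 / dist - 1.0 / safe_radius) * d / dist**3

class ActorNN:
    """Single-hidden-layer approximator u = W^T tanh(V^T x). V is fixed at
    random and only W is adapted, which keeps the count of learned
    parameters small."""
    def __init__(self, n_in, n_hidden, n_out, lr=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.V = rng.normal(scale=0.5, size=(n_in, n_hidden))
        self.W = np.zeros((n_hidden, n_out))
        self.lr = lr

    def features(self, x):
        return np.tanh(self.V.T @ x)

    def __call__(self, x):
        return self.W.T @ self.features(x)

    def adapt(self, x, error):
        # Error-driven weight update (soft constraint): adaptation stops as
        # the composite tracking error vanishes.
        self.W += self.lr * np.outer(self.features(x), error)

# One planar double-integrator spacecraft steering to a rendezvous point
# while repelling a (static, for simplicity) neighbor near its path.
dt = 0.05
kp, kd = 1.0, 2.0                       # baseline backstepping-style gains
target = np.array([5.0, 5.0])
neighbor = np.array([2.5, 2.0])
p, v = np.zeros(2), np.zeros(2)
actor = ActorNN(n_in=4, n_hidden=16, n_out=2)

for _ in range(2000):
    e = target - p
    x = np.concatenate([e, -v])
    # Total command: tracking law + learned compensation + hard APF term
    u = kp * e - kd * v + actor(x) + apf_repulsion(p, neighbor)
    actor.adapt(x, e - v)               # composite error drives adaptation
    v += dt * u                         # double-integrator dynamics
    p += dt * v

print("final position:", np.round(p, 3))
```

Note the design choice in `ActorNN`: only the output weights `W` are adapted while the random hidden-layer weights `V` stay fixed, one common way to realize the abstract's point about cutting the number of parameters the NNs must learn.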
