Generating collision-free, time-efficient paths in an uncertain, dynamic environment poses major challenges for the formation control with collision avoidance (FCCA) problem under a leader-follower structure, since the followers must account for formation maintenance and collision avoidance simultaneously. Unfortunately, most existing works simply combine methods that address the two problems separately. In this article, a new method based on deep reinforcement learning (RL) is proposed to solve the FCCA problem. Specifically, a learning-based policy is extended to the field of formation control through a two-stage training framework: imitation learning (IL) followed by RL. In the IL stage, a model-guided method, consisting of a consensus-theory-based formation controller and an optimal reciprocal collision avoidance strategy, is designed to speed up training and increase efficiency. In the RL stage, a compound reward function is presented to guide the training. In addition, we design a formation-oriented network structure to perceive the environment: long short-term memory (LSTM) enables the network to perceive an uncertain number of obstacles, and a transfer-training approach improves the network's generalization across scenarios. Numerous representative simulations are conducted, and the method is further deployed on an experimental platform based on a multi-omnidirectional-wheeled car system. Both the simulation and experiment results validate the effectiveness and practicability of the proposed method.
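To illustrate the idea of using an LSTM to handle an uncertain number of obstacles, the following is a minimal NumPy sketch, not the paper's actual architecture: a recurrent cell folds a variable-length list of obstacle feature vectors into a fixed-size embedding that a downstream policy network could consume. All names, feature dimensions, and the random initialization are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """A bare-bones LSTM cell with a single stacked gate matrix."""
    def __init__(self, in_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix for the four gates (input, forget, cell, output),
        # applied to the concatenation [x; h].
        self.W = rng.standard_normal((4 * hid_dim, in_dim + hid_dim)) * 0.1
        self.b = np.zeros(4 * hid_dim)
        self.hid_dim = hid_dim

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g          # updated cell state
        h = o * np.tanh(c)         # updated hidden state
        return h, c

def encode_obstacles(obstacles, cell):
    """Fold a variable-length list of obstacle feature vectors
    (e.g. relative position and velocity) into one fixed-size embedding."""
    h = np.zeros(cell.hid_dim)
    c = np.zeros(cell.hid_dim)
    for obs in obstacles:
        h, c = cell.step(np.asarray(obs, dtype=float), h, c)
    return h
```

The key property is that the embedding has the same shape whether the robot currently senses one obstacle or ten, so the policy network's input dimension stays fixed.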