Abstract

We present a novel method to address the problem of multi-vehicle conflict resolution in highly constrained spaces. An optimal control problem is formulated to incorporate nonlinear, non-holonomic vehicle dynamics and exact collision avoidance constraints. A solution to the problem can be obtained by first learning configuration strategies with reinforcement learning (RL) in a simplified discrete environment, and then using these strategies to generate new constraints and initial guesses for the original problem. Simulation results show that our method can explore efficient actions to resolve conflicts in confined spaces and generate dexterous maneuvers that are both collision-free and kinematically feasible.

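To make the pipeline concrete, the sketch below sets up a small two-vehicle version of this kind of warm-started trajectory optimization, using CasADi with IPOPT (a tooling choice assumed here, not stated in the abstract). It uses unicycle dynamics as the non-holonomic model, replaces the exact collision-avoidance constraints with a simpler circular-footprint surrogate, and stands in for the RL-derived strategy with a straight-line initial guess; all names, horizons, and parameter values are illustrative.

# Minimal sketch (not the authors' implementation): two-vehicle trajectory
# optimization with unicycle (non-holonomic) dynamics, solved via CasADi/IPOPT.
# Collision avoidance is approximated with circular footprints, and the
# RL-derived strategy is mocked as a coarse waypoint guess used as a warm start.
import casadi as ca
import numpy as np

N, dt = 40, 0.2      # horizon length and step size (assumed values)
r_safe = 1.5         # minimum center-to-center distance (circle approximation)

opti = ca.Opti()

# States [x, y, heading] and controls [speed, yaw rate] for two vehicles.
X = [opti.variable(3, N + 1) for _ in range(2)]
U = [opti.variable(2, N) for _ in range(2)]

starts = [np.array([0.0, 0.0, 0.0]), np.array([10.0, 1.0, np.pi])]
goals  = [np.array([10.0, 0.0, 0.0]), np.array([0.0, 1.0, np.pi])]

cost = 0
for i in range(2):
    opti.subject_to(X[i][:, 0] == starts[i])       # initial state
    opti.subject_to(X[i][:2, N] == goals[i][:2])   # terminal position
    for k in range(N):
        th = X[i][2, k]
        v, w = U[i][0, k], U[i][1, k]
        # Forward-Euler discretization of the unicycle model.
        opti.subject_to(X[i][:, k + 1] == X[i][:, k] + dt * ca.vertcat(
            v * ca.cos(th), v * ca.sin(th), w))
        opti.subject_to(opti.bounded(-2.0, v, 2.0))
        opti.subject_to(opti.bounded(-1.0, w, 1.0))
    cost += ca.sumsqr(U[i])  # penalize control effort

# Pairwise collision avoidance at every step (circular-footprint surrogate).
for k in range(N + 1):
    d = X[0][:2, k] - X[1][:2, k]
    opti.subject_to(ca.dot(d, d) >= r_safe**2)

opti.minimize(cost)

# Warm start: in the full method this guess would come from the configuration
# strategy learned in the simplified discrete environment; here we simply
# interpolate from start to goal.
for i in range(2):
    guess = np.linspace(starts[i], goals[i], N + 1).T
    opti.set_initial(X[i], guess)

opti.solver("ipopt")
sol = opti.solve()
print("vehicle 0 final position:", sol.value(X[0][:2, N]))

In the method described above, the learned configuration strategies would not only supply the initial guess but also generate additional constraints (e.g., restricting the homotopy of the maneuver), which this simplified sketch omits.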