Abstract

Motion primitives enable fast planning in complex and dynamic environments. Adversarial environments pose a particularly challenging and unpredictable scenario, and motion-primitive-based planners have the potential to perform well in them. The key challenge is to design a library of maneuvers that effectively captures the necessary capabilities of the vehicle. This work presents a primitive-based game tree search for solving adversarial games in continuous state and action spaces, and applies a reinforcement learning framework to autonomously generate effective primitives for the given task. The results demonstrate the ability of the learning framework to produce the maneuvers necessary for competing against adversaries. Furthermore, we propose a method for learning a model that estimates the state-dependent value of each motion primitive and demonstrate how incorporating this model improves planning performance under time constraints. Additionally, we compare our primitive-based algorithm against forward-simulation methods from the existing literature and highlight the benefits of motion primitives.
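The core idea of a primitive-based game tree search can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the paper's implementation: it runs a depth-limited minimax search where each player's action set is a small, discrete library of motion primitives (here, 1-D velocity maneuvers), which is what makes search tractable in an otherwise continuous action space. The names `Primitive`, `LIBRARY`, and the pursuit-evasion payoff are all hypothetical.

```python
# Hypothetical sketch of a primitive-based game tree search:
# depth-limited minimax where each move is drawn from a small
# library of motion primitives rather than a continuous action set.
from dataclasses import dataclass


@dataclass(frozen=True)
class Primitive:
    """A fixed maneuver; here simply a 1-D displacement per step."""
    name: str
    dv: float

    def apply(self, x: float) -> float:
        return x + self.dv


# Illustrative primitive library shared by both players.
LIBRARY = [Primitive("brake", -1.0), Primitive("coast", 0.0), Primitive("dash", 1.0)]


def payoff(px: float, ex: float) -> float:
    """Pursuer minimizes distance to the evader; evader maximizes it."""
    return abs(px - ex)


def search(px: float, ex: float, depth: int, pursuer_turn: bool = True):
    """Minimax over the primitive library; returns (value, best primitive)."""
    if depth == 0:
        return payoff(px, ex), None
    best_val, best_p = None, None
    for p in LIBRARY:
        if pursuer_turn:
            val, _ = search(p.apply(px), ex, depth - 1, pursuer_turn=False)
            better = best_val is None or val < best_val  # pursuer minimizes
        else:
            val, _ = search(px, p.apply(ex), depth - 1, pursuer_turn=True)
            better = best_val is None or val > best_val  # evader maximizes
        if better:
            best_val, best_p = val, p
    return best_val, best_p
```

For example, `search(0.0, 3.0, depth=2)` selects the pursuer's `dash` primitive, since closing the gap minimizes the distance the evader can restore on its reply. A learned, state-dependent value model of the kind described above could replace the terminal `payoff` call or order the loop over `LIBRARY` so that the most promising primitives are expanded first when planning time is limited.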
