Abstract

Modern modeling and simulation environments, such as commercial games and military training systems, frequently demand interactive agents that exhibit realistic, responsive behavior consistent with a predetermined specification, such as a storyboard or military tactics document. Traditional methods for creating such agents, including state machines and behavior trees, require substantial manual knowledge engineering to develop state representations and transition processes. Newer behavior-generation techniques, such as deep reinforcement learning, instead require vast amounts of training data (centuries of experience in many cases) and offer no guarantee that the generated behavior will align with the intended objectives and courses of action. This paper examines the application of behavior cloning to the design of interactive agents. In our approach, users first define desired behavior through straightforward means such as state machine models or behavior trees. Behavior cloning methods then transform ground-truth trajectory data sampled from these models into differentiable policies, which are further refined through engagement with interactive game environments. This method improves training outcomes in terms of both task performance and training stability.
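The sketch below illustrates the core behavior-cloning step the abstract describes: fit a differentiable policy, by supervised learning, to (state, action) pairs sampled from a hand-authored controller. It is a minimal illustration, not the paper's implementation; the environment dimensions, the `expert_policy` stand-in for a state machine or behavior tree, and the network shape are all assumptions, and the subsequent reinforcement-learning refinement is only indicated in a comment.

```python
# Minimal behavior-cloning sketch (PyTorch), assuming a discrete action space.
# expert_policy, STATE_DIM, and N_ACTIONS are hypothetical placeholders, not
# taken from the paper.
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 8, 4  # hypothetical environment dimensions


def expert_policy(state: torch.Tensor) -> int:
    """Stand-in for a hand-authored state machine / behavior tree controller."""
    return int(state.sum().item() > 0)  # trivial rule, for illustration only


# 1. Sample ground-truth (state, action) trajectories from the authored model.
states = torch.randn(1024, STATE_DIM)
actions = torch.tensor([expert_policy(s) for s in states])

# 2. Fit a differentiable policy to the demonstrations (behavior cloning).
policy = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(policy(states), actions)
    loss.backward()
    opt.step()

# 3. The cloned policy can now be refined with reinforcement learning
#    against the interactive game environment (not shown here).
```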
