Abstract

This paper presents a reinforcement learning model designed to learn how to take cover on geo-specific terrains, an essential behavior component for military training simulations. The models are trained in the Rapid Integration and Development Environment (RIDE) using the Unity ML-Agents framework. We show that increasing the number of novel situations the agent is exposed to during training improves performance on the test set. In addition, the trained models show some ability to generalize across terrains, and retraining an agent on a new terrain can take less time when that terrain's complexity is less than or equal to that of the terrain on which it was previously trained.
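
The abstract does not include code; as a rough illustration of the kind of Python-side interaction the Unity ML-Agents framework exposes, the sketch below connects to a Unity build and steps it with random actions in place of a trained policy. The build name, the single-behavior assumption, and the use of the low-level mlagents_envs API are illustrative assumptions, not the authors' actual training setup.

```python
# Minimal sketch (not from the paper): driving a Unity ML-Agents build
# from Python with the low-level mlagents_envs API and random actions.
from mlagents_envs.environment import UnityEnvironment

# Path to a hypothetical Unity build of the take-cover scenario.
env = UnityEnvironment(file_name="TakeCoverTerrain")
env.reset()

# A single registered behavior (agent type) is assumed in the scene.
behavior_name = list(env.behavior_specs)[0]
spec = env.behavior_specs[behavior_name]

for episode in range(3):
    env.reset()
    done = False
    while not done:
        decision_steps, terminal_steps = env.get_steps(behavior_name)
        if len(terminal_steps) > 0:
            # At least one agent reached a terminal state this step.
            done = True
            continue
        # Random actions stand in for the trained take-cover policy.
        action = spec.action_spec.random_action(len(decision_steps))
        env.set_actions(behavior_name, action)
        env.step()

env.close()
```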
