Evolutionary search (ES)-based techniques are commonly used for testing autonomous robotic systems. However, these approaches often rely on computationally expensive simulator-based models to evaluate test scenarios. To improve the computational efficiency of search-based testing, we propose augmenting the ES with a reinforcement learning (RL) agent trained using surrogate rewards derived from domain knowledge. In our approach, called RIGAA (Reinforcement learning Informed Genetic Algorithm for Autonomous systems testing), we first train an RL agent to learn useful constraints of the problem and then use it to produce a part of the initial population of the search algorithm. By incorporating an RL agent into the search process, we aim to guide the algorithm towards promising regions of the search space from the start, enabling more efficient exploration of the solution space. We evaluate RIGAA on two case studies: maze generation for an autonomous “Ant” robot and road topology generation for an autonomous vehicle lane-keeping assist system. In both case studies, RIGAA reveals more failures, with a higher level of diversity, than the compared baselines. Within a two-hour budget, RIGAA also reveals more failures than state-of-the-art tools for vehicle lane-keeping assist system testing, such as AmbieGen, CRAG, WOGAN, and Frenetic.
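The core seeding idea can be sketched briefly. The following is a minimal, hypothetical Python illustration, not the authors' actual implementation: all names (`rigaa_initial_population`, `rl_policy`, `random_scenario`, `rl_fraction`) are assumptions introduced here to show how an RL agent trained on a cheap surrogate reward might supply a fraction of the genetic algorithm's initial population, with the remainder sampled at random.

```python
import random

def rigaa_initial_population(pop_size, rl_fraction, rl_policy, random_scenario):
    """Mix RL-generated and random test scenarios into one initial population.

    pop_size:        total number of individuals in the GA population
    rl_fraction:     share of individuals produced by the RL agent (e.g. 0.3)
    rl_policy:       callable returning one scenario sampled from the trained agent
    random_scenario: callable returning one uniformly random scenario
    """
    n_rl = int(pop_size * rl_fraction)
    # Scenarios from the RL agent, which has learned problem constraints offline
    # against a surrogate reward, so no simulator calls are needed at this stage.
    population = [rl_policy() for _ in range(n_rl)]
    # Fill the rest with random scenarios to preserve population diversity.
    population += [random_scenario() for _ in range(pop_size - n_rl)]
    random.shuffle(population)  # avoid positional bias in later selection
    return population
```

Seeding only part of the population, rather than all of it, is one plausible way to balance the RL agent's learned bias towards promising regions against the random diversity the evolutionary search needs.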