Abstract

Reinforcement Learning (RL), one of the most active research areas in artificial intelligence, focuses on goal-directed learning from interaction with an uncertain environment. RL systems play an increasingly important role in many application domains, so their safety has received growing attention. Testing has achieved great success in ensuring the safety of traditional software systems, but traditional testing approaches rarely consider RL systems. To fill this gap, we propose a novel mutation testing framework specialized for RL systems. We propose a series of mutation operators that simulate errors RL systems may encounter, and show how to use these operators to mutate RL systems comprehensively. Furthermore, test environments are provided to reveal potential problems within RL systems. The mutation testing technique can assist in the construction of RL systems, and mutation scores specialized for RL systems are used to analyze the extent of potential faults and to evaluate the quality of test environments. Our evaluation in three popular environments, namely FrozenLake, CartPole, and MountainCar, demonstrates the practicability of the proposed techniques.
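
The abstract does not define the mutation operators concretely; as a rough illustration only, the sketch below shows one way a policy-level mutation operator for an RL agent could look. The function name `mutate_policy`, the `flip_prob` parameter, and the injected action-selection fault are hypothetical examples, not the paper's actual operators.

```python
# Illustrative sketch only (not the paper's operators): a simple mutation
# operator that corrupts an agent's action selection, simulating a fault in
# the policy of an RL system. A test environment "kills" this mutant if the
# mutated agent's return differs clearly from the original agent's return.
import random


def mutate_policy(policy, flip_prob=0.2, n_actions=2, seed=0):
    """Return a mutant policy that, with probability `flip_prob`, replaces
    the chosen action with a random one (hypothetical operator)."""
    rng = random.Random(seed)

    def mutant(observation):
        action = policy(observation)
        if rng.random() < flip_prob:
            return rng.randrange(n_actions)  # injected action-selection fault
        return action

    return mutant
```

In a mutation-testing workflow of this kind, each such mutant is run in the candidate test environments, and the fraction of mutants whose behavior the environments distinguish from the original agent contributes to a mutation score.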
