Abstract

<p>Mutation testing is regarded as an effective way to ensure the quality of Deep Learning (DL) software. Because it requires generating and executing a large number of mutants, mutation testing suffers from low efficiency. For traditional software, mutation operators that rarely cause program logic changes can be pruned, which effectively reduces the number of mutants and their executions. DL software, however, relies on model logic to make decisions, and that logic is characterized by decision boundaries. In this paper, we propose a mutation operator reduction technique for DL software. Specifically, for each group of DL mutation operators, we propose DocEntropy to measure the decision-boundary changes between the generated mutants and the original model. We then select the operator group with the highest entropy value and use its operators for further mutation testing. An empirical study on two DL models verified that the proposed approach leads to cost-effective DL software mutation testing (the number of mutants and their executions decreased by 33.61% on average) and achieves more accurate mutation scores (accuracy increased by 9.45% on average).</p>
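The abstract does not give DocEntropy's formula, so the following is only an illustrative sketch of one way an entropy over decision-boundary changes could be computed: for each test input, record which mutants flip the original model's prediction (a proxy for a boundary change), then take the Shannon entropy of the resulting flip-pattern distribution. The function name `boundary_change_entropy` and this formulation are assumptions, not the paper's actual definition.

```python
import math
from collections import Counter

def boundary_change_entropy(orig_preds, mutant_preds_list):
    """Hypothetical sketch: Shannon entropy of per-sample disagreement
    patterns between mutant models and the original model.

    orig_preds: list of predicted labels from the original model.
    mutant_preds_list: list of prediction lists, one per mutant model.
    """
    patterns = []
    for i in range(len(orig_preds)):
        # For sample i, note which mutants disagree with the original
        # model -- a crude proxy for a decision-boundary change there.
        flips = tuple(m[i] != orig_preds[i] for m in mutant_preds_list)
        patterns.append(flips)
    counts = Counter(patterns)
    n = len(patterns)
    # Shannon entropy (in bits) of the flip-pattern distribution:
    # higher entropy = more diverse boundary changes across mutants.
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Example: 4 test inputs, original predictions, and two mutants.
score = boundary_change_entropy(
    [0, 1, 1, 0],
    [[0, 1, 0, 0],   # mutant 1 flips sample 2
     [1, 1, 1, 0]],  # mutant 2 flips sample 0
)
```

Under this sketch, an operator group whose mutants produce a more varied set of flip patterns yields higher entropy, which would mark it as the group to keep for further mutation testing.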
