Abstract

The safety and robustness of deep neural networks (DNNs) are currently of great concern. Adequate testing is an effective technique for ensuring software trustworthiness. However, existing DNN testing methods generate many invalid test inputs, which inevitably increases computational overhead and reduces the efficiency of DNN testing. In this paper, we focus on testing task-specific DNNs and investigate the generation of diverse, valid, and natural test inputs based on data augmentation techniques. Specifically, we propose AugTest, a DNN testing method based on stochastic optimization with momentum that searches for optimal compositions of data augmentation parameters to efficiently generate diverse and valid test inputs. Experimental results show that our proposed method can effectively explore the data manifold and find valid test inputs with high diversity and naturalness. Compared with the best-performing baseline, AugTest generates more test inputs with higher average diversity in less average time. Furthermore, the generated test inputs generalize well to DNNs with different structures: test error rates exceed 70% when the inputs generated by AugTest are used to test other DNN models performing similar tasks. This implies that our method can produce more valid and generalizable data for unveiling DNNs' errors.
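The abstract does not spell out the search procedure, but the core idea, stochastic optimization with momentum over a vector of augmentation parameters (e.g. rotation angle, brightness shift), can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's actual algorithm: `momentum_search` and the black-box `score` objective (standing in for a diversity/validity measure) are hypothetical names, and the gradient is estimated by random probing since such objectives are typically non-differentiable.

```python
import random

def momentum_search(score, init, lr=0.3, beta=0.9, sigma=0.2, steps=200, seed=0):
    """Hypothetical sketch: momentum-based stochastic search over augmentation
    parameters. `score` is a black-box objective to maximize (imagine a
    diversity/validity measure of the augmented inputs); an ascent direction
    is estimated by probing a random perturbation of the current parameters."""
    rng = random.Random(seed)
    params = list(init)
    velocity = [0.0] * len(params)
    best, best_score = list(params), score(params)
    for _ in range(steps):
        # Probe a random direction and measure the change in the objective.
        probe = [rng.gauss(0.0, sigma) for _ in params]
        gain = score([p + d for p, d in zip(params, probe)]) - score(params)
        for i in range(len(params)):
            # Momentum update: accumulate the estimated ascent direction,
            # which smooths the noisy single-probe gradient estimates.
            velocity[i] = beta * velocity[i] + lr * gain * probe[i]
            params[i] += velocity[i]
        s = score(params)
        if s > best_score:
            best, best_score = list(params), s
    return best, best_score

# Toy usage: the optimum of this stand-in objective is at (1.0, -0.5).
best, best_score = momentum_search(
    lambda p: -(p[0] - 1.0) ** 2 - (p[1] + 0.5) ** 2,
    init=[0.0, 0.0],
)
```

The momentum term is what distinguishes this from plain random search: consecutive probes that agree on a direction reinforce each other, so the search moves steadily through flat regions of the objective instead of jittering in place.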
