Abstract

Deep learning (DL) has been widely adopted in many safety-critical scenarios, and deep neural networks (DNNs) usually form the core of these DL systems. Existing studies have shown that DNNs can suffer from various vulnerabilities that lead to severe consequences. To improve the testing adequacy of DNNs, researchers have proposed several coverage criteria, e.g., neuron coverage in DeepXplore. The prediction result of a DNN is jointly determined by the outputs of its neurons and the weights of the connections to next-layer neurons. However, existing coverage criteria use only the output of a neuron to determine its activation state and ignore the connection weights it emits. In this paper, we propose DeepCon, a novel contribution coverage. In DeepCon, we define a contribution as the combination of a neuron's output and the connection weight it emits, and use contribution coverage to gauge the testing adequacy of DNNs. DeepCon thoroughly covers both neurons and the connection weights they emit, and scales well to large DNNs. We further propose a contribution-coverage-guided test generation approach, DeepCon-Gen, which automatically generates tests to activate the inactivated contributions of a DNN. We evaluate DeepCon and DeepCon-Gen on five different DNNs over two popular datasets. The experimental results show that DeepCon effectively measures the testing adequacy of these DNNs, and that DeepCon-Gen effectively activates inactivated contributions; 62.6% of the generated tests lead to mispredictions.
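
To make the idea concrete, below is a minimal sketch of how a contribution-style coverage might be computed for a single fully connected layer. It is not the paper's exact formulation: treating the contribution as the product of a neuron's output and its outgoing weight, the activation threshold, and the function name `contribution_coverage` are all illustrative assumptions.

```python
import numpy as np

def contribution_coverage(outputs, weights, threshold=0.0):
    """Sketch of contribution coverage for one fully connected layer.

    outputs: (num_tests, n) array of neuron outputs over a test set.
    weights: (n, m) weight matrix connecting this layer to the next.
    A contribution (i, j) is treated as activated if, on any test input,
    outputs[t, i] * weights[i, j] exceeds the threshold (an assumed
    criterion, used here only for illustration).
    """
    # contributions[t, i, j] = output of neuron i on test t times weight w_ij
    contributions = outputs[:, :, None] * weights[None, :, :]
    # A contribution counts as covered if at least one test activates it.
    activated = (contributions > threshold).any(axis=0)
    return activated.sum() / activated.size

# Toy example: 4 test inputs, a layer of 3 neurons feeding 2 neurons.
rng = np.random.default_rng(0)
outputs = rng.standard_normal((4, 3))
weights = rng.standard_normal((3, 2))
print(f"contribution coverage: {contribution_coverage(outputs, weights):.2f}")
```

Under this reading, a test generation approach in the spirit of DeepCon-Gen would search for inputs that flip currently inactivated (neuron, weight) pairs above the threshold, thereby raising the coverage value.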
