Abstract

Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they perform reliably across different scenarios. In this article, we provide an overview of structural coverage metrics for testing DNN models, including neuron coverage, k-multisection neuron coverage, top-k neuron coverage, neuron boundary coverage, strong neuron activation coverage, and modified condition/decision coverage. We evaluate the metrics on realistic DNN models used for perception tasks (LeNet-1, LeNet-4, LeNet-5, ResNet20), including a network used in autonomy (TaxiNet). We also provide a tool, DNNCov, which can measure testing coverage for all of these metrics. DNNCov outputs an informative coverage report that enables researchers and practitioners to assess the adequacy of DNN testing, to compare different coverage measures, and to more conveniently inspect the model’s internals during testing.
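To illustrate the simplest of these metrics, the sketch below shows one common way to compute classic (threshold-based) neuron coverage from pre-extracted activations; the names `layer_activations` and `THRESHOLD` and the threshold value are illustrative assumptions, not part of DNNCov or the paper.

```python
# Minimal sketch of threshold-based neuron coverage, assuming activations
# have already been extracted per layer for each test input.
import numpy as np

THRESHOLD = 0.25  # hypothetical activation threshold (an assumption, not from the paper)

def neuron_coverage(layer_activations, threshold=THRESHOLD):
    """layer_activations: list of arrays, each of shape (num_inputs, num_neurons)."""
    covered = 0
    total = 0
    for acts in layer_activations:
        # A neuron counts as covered if any test input drives it above the threshold.
        covered += int(np.sum(np.any(acts > threshold, axis=0)))
        total += acts.shape[1]
    return covered / total

# Example with random stand-in activations for two layers.
rng = np.random.default_rng(0)
acts = [rng.random((100, 64)), rng.random((100, 10))]
print(f"Neuron coverage: {neuron_coverage(acts):.2%}")
```

The other metrics listed above refine this idea, e.g. by partitioning each neuron's observed activation range into k sections or by tracking only the top-k most active neurons per layer.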
