Abstract

Adversarial examples cause deep learning (DL) systems to misclassify. Such misclassifications are hard to debug because of the intrinsic complexity of DL architectures, which makes coverage-guided fuzzing, a technique widely used to find crashes in complex software, a promising approach. However, the mutation strategies of existing DL fuzzers such as DeepHunter face a dilemma: either they restrict the stacking of multiple transformations, penalizing input diversity, or they permit it and accumulate significant distortion, producing invalid inputs that are unrecognizable or ambiguous to humans. Yet applying multiple transformations is critical to mutation-based fuzzing. To address this problem, we propose mixed and constrained mutation (MCM) for DL fuzzers. MCM's human-perception-based constraints avoid significant distortion both within a single transformation and across the aggregation of multiple transformations. To implement these constraints, we verified the transformation parameters through surveys of 15 participants on each of the MNIST, STL-10, and ImageNet datasets, followed by statistical tests. MCM returns valid inputs in almost every fuzzing iteration. Furthermore, MCM improved fuzzing performance over DeepHunter on various DL architectures trained on MNIST, STL-10, and ImageNet: it discovered 17.6% more seeds showing new coverage and 132% more adversarial examples on average, and those adversarial examples cover more than twice as many incorrect classes per original image as DeepHunter's.
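The abstract specifies MCM's constraints only through the survey-derived parameters, so the following is a rough illustrative sketch of the idea rather than the authors' implementation: several pixel-value transformations are applied in sequence, and a mutant is kept only if the accumulated distortion stays within a global bound. All parameter bounds, the L∞ budget, and the names (e.g., `mixed_constrained_mutate`) are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-transformation bounds standing in for the survey-derived,
# human-perception constraints described in the abstract (images in [0, 1]).
BRIGHTNESS = (-0.10, 0.10)   # additive pixel shift
CONTRAST   = (0.90, 1.10)    # multiplicative scale about the image mean
NOISE_STD  = (0.00, 0.02)    # std of additive Gaussian noise
LINF_BUDGET = 0.30           # hypothetical aggregate distortion bound

def brightness(img):
    return img + rng.uniform(*BRIGHTNESS)

def contrast(img):
    c = rng.uniform(*CONTRAST)
    return (img - img.mean()) * c + img.mean()

def noise(img):
    return img + rng.normal(0.0, rng.uniform(*NOISE_STD), img.shape)

TRANSFORMS = [brightness, contrast, noise]

def mixed_constrained_mutate(seed, n_transforms=3, max_tries=20):
    """Apply several randomly chosen transformations in sequence; keep the
    result only if the accumulated distortion stays within the budget."""
    for _ in range(max_tries):
        mutant = seed.copy()
        for i in rng.integers(0, len(TRANSFORMS), size=n_transforms):
            mutant = np.clip(TRANSFORMS[i](mutant), 0.0, 1.0)
        # Aggregate constraint: reject mutants whose total distortion would
        # plausibly make them unrecognizable or ambiguous to humans.
        if np.abs(mutant - seed).max() <= LINF_BUDGET:
            return mutant
    return seed  # no valid mutant found; fall back to the unmodified seed

# Example: mutate a random 32x32 grayscale "image".
mutant = mixed_constrained_mutate(rng.random((32, 32)))
```

The rejection loop reflects why such a scheme can return valid inputs in almost every iteration: an over-distorted candidate is simply discarded and resampled before it ever reaches the model under test.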
