Abstract
Recent research on the reliability of Deep Neural Networks (DNNs) has revealed that it is easy to produce images that are completely unrecognizable to humans but that DNNs nevertheless classify as recognizable objects with 99.99% confidence. The present study investigates the effect of search space reduction for Genetic Algorithms (GAs) on their capability of purposefully fooling DNNs. To this end, we introduce a GA with suitable modifications that is able to fool neural networks trained to classify objects from well-known benchmark image data sets such as GTSRB or MNIST. The GA is then extended so that it can reduce the search space without changing its general behavior. Empirical results on MNIST indicate a significantly decreased number of generations needed to reach the targeted confidence of an MNIST image classifier (12 instead of 228 generations). Experiments on GTSRB, a more challenging object classification scenario, show similar results. Fooling DNNs is thus not only easily possible but can also be done very quickly. Our study thereby substantiates an already recognized potential danger for DNN-based computer vision or object recognition applications.
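The attack described above can be illustrated with a minimal sketch. The code below is not the paper's implementation: `classifier_confidence` is a hypothetical stand-in for a trained DNN, and the `mutable` parameter models the search-space reduction by restricting which pixels the GA may mutate. All names and parameter choices are illustrative assumptions.

```python
import random

def classifier_confidence(image):
    # Toy stand-in for a trained DNN: "confidence" is the fraction of
    # pixels above 0.5. A real attack would query the network itself.
    return sum(p > 0.5 for p in image) / len(image)

def fool(n_pixels=16, pop_size=20, target_conf=0.99, mutable=None, seed=1):
    """Evolve an image until the surrogate classifier's confidence reaches
    target_conf. `mutable` lists the pixel indices the GA may change,
    modelling the search-space reduction described in the abstract."""
    rng = random.Random(seed)
    if mutable is None:
        mutable = list(range(n_pixels))  # full search space by default
    population = [[rng.random() for _ in range(n_pixels)]
                  for _ in range(pop_size)]
    generation = 0
    while True:
        # Rank the population by classifier confidence (the GA's fitness).
        population.sort(key=classifier_confidence, reverse=True)
        best = population[0]
        if classifier_confidence(best) >= target_conf:
            return best, generation
        # Elitism: keep the best; each offspring mutates one mutable pixel.
        offspring = []
        for _ in range(pop_size - 1):
            child = list(best)
            child[rng.choice(mutable)] = rng.random()
            offspring.append(child)
        population = [best] + offspring
        generation += 1

image, generations = fool()
```

In this sketch, shrinking `mutable` to the pixels the classifier is actually sensitive to is what cuts the number of generations, analogous to the reduction from 228 to 12 generations reported for MNIST.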