Deep neural networks (DNNs) have become increasingly ubiquitous in daily life, with applications in areas such as image recognition, speech recognition, and natural language processing. However, a growing concern is the vulnerability of DNNs to adversarial examples: malicious inputs that can compromise the safety and accuracy of their outputs. Existing studies predominantly fall into two categories: white-box and black-box techniques. White-box techniques require detailed internal information about the model and often rely on gradient-based methods. In contrast, black-box techniques, which more closely emulate real-world scenarios, rely only on input–output knowledge. This study focuses on black-box strategies, for which two key prior approaches are the single-objective variant of differential evolution (Pixel-SOO) and the multi-objective variant of differential evolution (Pixel-MOO). While these approaches show promise, they suffer from drawbacks such as long execution times and an inability to generate adversarial examples in some cases. To address these challenges, we introduce an archive-based Many Independent Objective (MIO) algorithm, used for the first time in this context. The proposed algorithm identifies the most vulnerable image pixels through the MIO algorithm, enabling efficient label-flip attacks with a minimal number of attempts. Furthermore, we balance exploration and exploitation by incorporating an adaptive parameter mechanism. The effectiveness of the proposed algorithm is assessed on the VGG (VGG16 and VGG19) and ResNet (ResNet50, ResNet101, and ResNet152) architectures, all of which are convolutional neural network (CNN) models. The success criterion is to minimize the number of pixel changes while achieving high flip rates with a minimal number of queries to the network. A comprehensive analysis of the experimental results shows that our algorithm consistently outperforms Pixel-SOO and Pixel-MOO, with an average speedup of three times over Pixel-SOO and seven times over Pixel-MOO. In most runs, adversarial examples are generated with fewer pixel changes than with Pixel-SOO and Pixel-MOO. In addition, our findings are openly accessible on GitHub to ensure transparency and reproducibility and to encourage future research.
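For intuition, the sketch below outlines how an archive-based, many-independent-objective single-pixel attack of this kind might be structured: one archive slot per candidate target class, a single-pixel perturbation that is mutated and scored against the black-box model, and greedy acceptance whenever that class's probability improves, stopping as soon as the predicted label flips. All names (`predict_proba`, `mutate`, `apply_pixel`) and the toy classifier are illustrative assumptions rather than the paper's implementation, and the adaptive parameter mechanism is omitted for brevity.

```python
# Minimal sketch of an archive-based, MIO-style black-box single-pixel attack.
# Assumption: the target model is exposed only through probability queries.
import numpy as np

rng = np.random.default_rng(0)

def predict_proba(image):
    """Stand-in for the black-box classifier (e.g., VGG16 or ResNet50).
    Returns a probability vector; replace with real model queries."""
    logits = rng.normal(size=10)            # placeholder: 10-class toy output
    e = np.exp(logits - logits.max())
    return e / e.sum()

def mutate(candidate, shape, step=32):
    """Perturb a candidate pixel: jitter its location and RGB value."""
    x, y, r, g, b = candidate
    x = int(np.clip(x + rng.integers(-2, 3), 0, shape[0] - 1))
    y = int(np.clip(y + rng.integers(-2, 3), 0, shape[1] - 1))
    rgb = np.clip(np.array([r, g, b]) + rng.integers(-step, step + 1, 3), 0, 255)
    return (x, y, *rgb.tolist())

def apply_pixel(image, candidate):
    """Return a copy of the image with one pixel overwritten."""
    x, y, r, g, b = candidate
    adv = image.copy()
    adv[x, y] = (r, g, b)
    return adv

def mio_pixel_attack(image, true_label, budget=500):
    """Maintain one independent archive entry per non-true class and accept a
    mutated candidate if it raises that class's probability; succeed once the
    predicted label flips away from true_label."""
    h, w, _ = image.shape
    num_classes = len(predict_proba(image))
    targets = [c for c in range(num_classes) if c != true_label]
    archive = {c: (int(rng.integers(h)), int(rng.integers(w)),
                   *rng.integers(0, 256, 3).tolist()) for c in targets}
    scores = {c: -np.inf for c in targets}
    for query in range(budget):
        c = targets[int(rng.integers(len(targets)))]   # pick one independent objective
        cand = mutate(archive[c], image.shape)
        probs = predict_proba(apply_pixel(image, cand))
        if int(probs.argmax()) != true_label:          # label flipped: success
            return apply_pixel(image, cand), query + 1
        if probs[c] > scores[c]:                       # greedy acceptance for objective c
            archive[c], scores[c] = cand, probs[c]
    return None, budget                                # failed within the query budget

# Usage (toy input): adv, n_queries = mio_pixel_attack(
#     np.zeros((32, 32, 3), dtype=np.uint8), true_label=0)
```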