Adversarial samples deceive machine learning models through small, carefully crafted perturbations that lead to erroneous outputs. The severity of the adversarial sample problem has come to the forefront with the widespread use of machine learning in areas such as security systems, autonomous driving, speech recognition, finance, and medical diagnostics. Malicious attackers can use adversarial samples to circumvent security detection systems, interfere with autonomous driving perception, mislead speech recognition, defraud financial systems, and even cause medical diagnosis errors. The emergence of adversarial samples exposes the vulnerability of existing models and complicates post-incident information tracing and forensics. Current adversarial sample restoration methods aim mainly to improve model robustness. Traditional approaches focus only on improving the model's classification accuracy, ignoring adversarial information that is crucial for understanding the attack mechanism and strengthening future defenses. To address this issue, we propose an adversarial sample restoration method based on the similarity between blocks of clean and adversarial samples, balancing the needs of adversarial forensics and recognition accuracy. We implement the Fast Gradient Sign Method (FGSM), Basic Iterative Method (BIM), and Momentum Iterative Attack (MIA) on the MNIST, F-MNIST, and EMNIST datasets and perform experimental validation. The results demonstrate that our restoration method significantly enhances the model's classification accuracy across various datasets and attack scenarios. Comparative analysis shows that the restored samples maintain a high similarity to the original adversarial samples, confirming the method's effectiveness. In addition, we perform performance tests on samples before and after restoration.
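For readers unfamiliar with the attacks named above, FGSM is the simplest of the three: it perturbs the input by a fixed step in the direction of the sign of the loss gradient with respect to the input (BIM and MIA iterate this step, MIA with momentum). A minimal sketch follows, using a hand-computed logistic-regression gradient so the code is self-contained; the model, weights `w`, and step size `epsilon` are illustrative assumptions, not the classifiers or settings used in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """One FGSM step: x_adv = x + epsilon * sign(d loss / d x).

    For logistic regression with binary cross-entropy loss, the
    gradient of the loss w.r.t. the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)        # model's predicted probability
    grad_x = (p - y) * w          # analytic input gradient
    return x + epsilon * np.sign(grad_x)

# Toy data (assumptions for illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=4)            # hypothetical model weights
x = rng.normal(size=4)            # clean input
y = 1.0                           # true label
x_adv = fgsm(x, y, w, b=0.0, epsilon=0.1)

# Every component moves by exactly epsilon (the sign is +/-1 here),
# so the perturbation is bounded in the L-infinity norm.
print(np.max(np.abs(x_adv - x)))
```

The same `sign(grad)` step, applied repeatedly with a small step size and clipping, yields BIM; adding an accumulated gradient term yields the momentum variant.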
Taking the MNIST dataset as an example, after the FGSM attack the performance metrics of the restored samples, MAPE, MAE, RMSE, and VAPE, improved by 88%, 88%, 65%, and 82%, respectively. This indicates that our restoration method preserves the information about the generation mechanism of the adversarial samples while improving the model's performance. The approach balances forensic capability and prediction accuracy, points to a new direction in adversarial sample research, and has a substantial impact on security defense in practical applications.
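Three of the four metrics cited (MAE, RMSE, MAPE) have standard definitions, sketched below; these formulas are the conventional ones and are not taken from the paper, and VAPE is omitted since the abstract does not define it. The sample vectors are made-up numbers for illustration.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error (requires nonzero y_true)."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

# Toy values (assumptions, not the paper's data).
y_true = np.array([100.0, 200.0, 400.0])
y_pred = np.array([110.0, 180.0, 400.0])

print(mae(y_true, y_pred))   # errors 10, 20, 0 -> MAE is 10.0
print(rmse(y_true, y_pred))  # penalizes the larger error more than MAE
print(mape(y_true, y_pred))  # scale-free, reported as a percentage
```

Lower values mean the restored samples behave closer to the clean originals, which is why percentage reductions in these metrics are reported as improvements.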