Adversarial attacks on deep learning models have attracted significant attention in recent years. This study investigates the effectiveness of adversarial attacks targeting the VGG16 model in the context of cat and dog image classification. Using the Fast Gradient Sign Method (FGSM), the experiments show that, within a certain perturbation range, FGSM attacks can indeed reduce the model's average confidence, albeit with relatively minor impact on accuracy. The drop in accuracy (from 88.5% to 88.2%) is not significant, possibly because only two classes are involved. At small perturbation magnitudes ε, the attack produces a notable drop in confidence; at larger ε, however, the additional impact lessens, with confidence converging to around 50% across the cat and dog classes, which is the upper limit for a non-targeted FGSM attack in a two-class setting. This research also underscores the need for further exploration of diverse adversarial attack methods and of model interpretability in image classification. Overall, these results can guide further work on defense strategies against adversarial attacks, holding significant potential for real-world applications in enhancing the robustness of AI systems.
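The FGSM attack described above perturbs an input in the direction of the sign of the loss gradient, x_adv = x + ε·sign(∇ₓL(x, y)). As a minimal sketch of this update, the toy example below applies FGSM to a binary logistic classifier rather than VGG16, since the gradient with respect to the input has a closed form there; the classifier weights and inputs are illustrative assumptions, not values from the study.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM for a binary logistic classifier p = sigmoid(w.x + b).

    For binary cross-entropy loss, the gradient w.r.t. the input is
    (p - y) * w, so the adversarial example is x + eps * sign(grad).
    (Toy stand-in for the VGG16 setting, where the gradient would be
    obtained by backpropagation instead.)
    """
    p = sigmoid(np.dot(w, x) + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Illustrative classifier and a confidently classified input (y = 1).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, -1.0])

p_clean = sigmoid(np.dot(w, x) + b)          # high confidence on clean input
x_adv = fgsm_perturb(x, 1.0, w, b, eps=0.5)
p_adv = sigmoid(np.dot(w, x_adv) + b)        # confidence drops after attack
```

Larger ε moves the confidence further toward 0.5, mirroring the two-class ceiling observed in the study's non-targeted setting.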