Advancing the safety and reliability of nuclear power plants (NPPs) is essential to protecting human life and the environment and to sustaining the use of clean energy. A key factor in achieving this goal is enhancing operator performance, since operators' decisions play a crucial role during reactor accidents. In this context, integrating artificial intelligence (AI) with NPP safety systems could provide suggestions and recommendations to the operator, leading to quicker and more accurate responses. However, the complexity of models such as Convolutional Neural Networks (CNNs), often regarded as black boxes, makes it difficult for operators to comprehend the decision-making process, which in turn undermines trust in the model's output. Ensuring algorithm transparency is therefore crucial to maintaining a positive relationship between humans and AI technology. In this study, the required data were collected through an automation technique implemented in the PCTRAN software. A CNN was then employed to classify various design basis accidents (DBAs), with transfer learning applied to improve accuracy. Finally, Shapley Additive Explanations (SHAP) were used to provide explainability of the outcomes. The results show that automation is effective in reducing time consumption and optimizing hardware resource utilization. Among the pre-trained models, MobileNet exhibits superior performance, requiring a smaller dataset and less training time, achieving the highest accuracy, and showing the most balanced distribution of correct and incorrect predictions across the classes. Furthermore, the use of Explainable AI (XAI) makes the results transparent and clarifies the reasons behind the CNN's incorrect predictions.
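To make the described workflow concrete, the sketch below shows one way such a transfer-learning and SHAP pipeline could be assembled in Python with Keras and the `shap` library: a frozen, ImageNet-pretrained MobileNet backbone topped with a new classification head, followed by a SHAP explanation of a few predictions. The class count, input shape, placeholder data, and choice of `GradientExplainer` are illustrative assumptions rather than the study's actual configuration.

```python
import numpy as np
import tensorflow as tf
import shap

NUM_CLASSES = 4            # hypothetical number of DBA classes
IMG_SHAPE = (224, 224, 3)  # assumed input size for MobileNet

# Load MobileNet pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNet(
    input_shape=IMG_SHAPE, include_top=False, weights="imagenet"
)
base.trainable = False     # freeze the convolutional backbone (transfer learning)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder arrays standing in for the PCTRAN-derived accident data.
x_train = np.random.rand(32, *IMG_SHAPE).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=32)
model.fit(x_train, y_train, epochs=1, batch_size=8)

# Explain a few predictions with SHAP; the explainer type is an assumption.
background = x_train[:16]
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(x_train[:2])
shap.image_plot(shap_values, x_train[:2])
```

The SHAP image plot highlights which input regions pushed the network toward or away from each accident class, which is the kind of per-prediction transparency the abstract refers to when explaining the CNN's incorrect predictions.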