Abstract

This study evaluates the effectiveness of transfer learning with the ResNet50 model for classifying lung CT scan images as cancerous or non-cancerous. It also explores two interpretability methods, LIME (Local Interpretable Model-Agnostic Explanations) and SHAP (SHapley Additive exPlanations), to explain the predictions the ResNet50 model makes on CT scan images. The objectives are to develop a deep learning model based on the ResNet50 architecture, evaluate its performance with standard metrics, and explain its predictions using LIME and SHAP. The dataset is a collection of lung CT scan images labeled for the presence or absence of cancer. Under k-fold cross-validation, the model achieves high accuracy and low loss, demonstrating its effectiveness in classifying lung cancer. LIME and SHAP highlight the features and regions of the CT scan images that drive the model's predictions, clarifying its decision-making process. The results underscore the potential of transfer learning and interpretability techniques to improve both the accuracy and the explainability of lung cancer detection models. Future work may apply the model to larger datasets, classify different stages of cancer, and localize the lung regions where cancer cells are detected.
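The transfer-learning setup described above can be sketched as follows. This is an illustrative outline, not the authors' exact pipeline: the input size, pooling head, optimizer, and loss are assumptions, and in practice `weights="imagenet"` would be passed to load pretrained features (set to `None` here only so the sketch runs offline).

```python
# Sketch of ResNet50 transfer learning for binary lung-CT classification.
# Hyperparameters and head design are illustrative assumptions, not taken
# from the paper.
import tensorflow as tf
from tensorflow.keras.applications import ResNet50
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3)):
    # In a real transfer-learning run, use weights="imagenet" to load
    # pretrained convolutional features; None avoids a network download here.
    base = ResNet50(weights=None, include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze the backbone; train only the new head

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),  # cancer vs. no cancer
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
print(model.output_shape)  # single sigmoid output per image
```

In a k-fold cross-validation loop as the abstract describes, `build_model()` would be called fresh for each fold and fit on that fold's training split, with accuracy and loss averaged across folds.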
