Lung cancer is a malignant tumor that originates in the respiratory tissues when the cells lining the airways of the lungs proliferate uncontrollably. Although computed tomography (CT) scans can reveal suspicious regions, they cannot diagnose lung tumors on their own, and automatically identifying the distribution of nodules within lung CT scans is challenging. Segmentation of lung CT images therefore facilitates the detection and classification of lung tumors. This study uses 1097 lung CT images from the Iraq-Oncology Teaching Hospital/National Centre for Cancer Diseases (IQ-OTH/NCCD) dataset. In the first phase of the work, the lung CT images are segmented to locate malignancies using the U-Net and Attention Gate Residual U-Net (AGRes U-Net) architectures, and the segmented images are evaluated with the Intersection over Union (IoU) metric and the binary focal loss. AGRes U-Net reaches 97% accuracy, outperforming the standard U-Net architecture. A YOLOv5 network then annotates the segmented lung CT images with lesion labels, and the labelled outputs are fed into a VGG-19 architecture, which classifies the tumors as normal, benign, or malignant with an accuracy of 94.8%. In the second phase, to compare the segmentation-based pipeline against a direct convolutional neural network (CNN) classifier, the labelled lung CT images are fed into a CNN model trained with the Adaptive Moment Estimation (Adam) optimizer, which classifies the tumors with 98% accuracy. The study shows that lung cancer detection and classification based on AGRes U-Net segmentation, at 97% accuracy, is nearly equivalent to the CNN model, which achieves 98% accuracy.
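The attention gate at the heart of AGRes U-Net can be illustrated with a minimal Keras sketch. This is not the authors' implementation: the function name attention_gate, the inter_channels parameter, and the assumption that the gating signal has already been upsampled to the skip connection's spatial size are illustrative choices, and the residual encoder blocks of AGRes U-Net are omitted for brevity.

    import tensorflow as tf
    from tensorflow.keras import layers

    def attention_gate(skip, gating, inter_channels):
        # Additive attention gate: the decoder's gating signal re-weights
        # the encoder's skip features so the network attends to nodule
        # regions and suppresses irrelevant background.
        # Assumes `skip` and `gating` share the same spatial dimensions.
        theta_x = layers.Conv2D(inter_channels, 1)(skip)    # project skip features
        phi_g = layers.Conv2D(inter_channels, 1)(gating)    # project gating signal
        attn = layers.Activation('relu')(layers.add([theta_x, phi_g]))
        attn = layers.Conv2D(1, 1, activation='sigmoid')(attn)  # attention map in [0, 1]
        return layers.multiply([skip, attn])  # gated skip connection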
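The two segmentation metrics named above have simple closed forms: IoU = |A ∩ B| / |A ∪ B| for predicted and ground-truth masks, and the binary focal loss FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t). A minimal NumPy sketch follows; the defaults gamma = 2 and alpha = 0.25 are the values commonly used in the focal loss literature, not figures taken from this study.

    import numpy as np

    def iou(y_true, y_pred, eps=1e-7):
        # Intersection over Union between two binary masks.
        inter = np.logical_and(y_true, y_pred).sum()
        union = np.logical_or(y_true, y_pred).sum()
        return (inter + eps) / (union + eps)

    def binary_focal_loss(y_true, p, gamma=2.0, alpha=0.25, eps=1e-7):
        # Focal loss down-weights well-classified pixels, which helps with
        # the class imbalance between small nodules and large background.
        p = np.clip(p, eps, 1.0 - eps)
        p_t = np.where(y_true == 1, p, 1.0 - p)            # prob. of the true class
        alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
        return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))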
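The three-class VGG-19 classifier can likewise be sketched in Keras. The input size (224 x 224 x 3), the frozen ImageNet base, the 256-unit dense layer, and the learning rate are assumptions for illustration; the abstract states only that VGG-19 separates normal, benign, and malignant cases and that the second-phase CNN is trained with the Adam optimizer, which is what the compile step shows.

    from tensorflow.keras.applications import VGG19
    from tensorflow.keras import layers, models, optimizers

    # Pretrained VGG-19 as a frozen feature extractor (assumed setup).
    base = VGG19(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dense(3, activation='softmax'),  # normal / benign / malignant
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])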