Background/Objectives: Accurate volumetric assessment of lung nodules is an essential element of low-dose lung cancer screening programs. Current guidance recommends applying specific thresholds to the measured nodule volume to guide subsequent clinical decisions. In practice, however, CT scans often have heterogeneous slice thicknesses, which are known to adversely affect the accuracy of nodule volume assessment. Methods: In this study, a deep learning (DL)-based 3D super-resolution method is proposed for generating thin-slice CT images from heterogeneous thick-slice CT images in lung cancer screening. We evaluated performance qualitatively through radiologists' perceptual assessment and quantitatively through the accuracy of nodule volume measurements and the agreement of volume-based Lung-RADS nodule categories. Results: A 5-point Likert scale assessment by two radiologists showed that the quality of DL-generated thin-slice images from thick-slice CT images was on a par with that of the ground-truth thin-slice CT images. Furthermore, thick- and thin-slice CT images differed in measured nodule volume by 52.2 percent on average, a difference that was reduced to 15.7 percent with DL-generated thin-slice CT. In addition, the proposed method increased the agreement of lung nodule categorization using Lung-RADS by 74 percent. Conclusions: The proposed DL approach to slice thickness normalization has the potential to improve the accuracy of lung nodule volumetry and to facilitate more reliable early lung nodule detection.
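As an illustration of the quantitative evaluation described above, the sketch below computes the mean percent volume difference between paired nodule measurements and the rate of category agreement under a volume-threshold scheme. It is a minimal example only: the cut-points, function names, and sample values are assumptions for illustration, not the exact Lung-RADS criteria or the code used in the study.

```python
# Illustrative sketch only: thresholds and helper names are assumptions,
# not the actual Lung-RADS criteria or the study's implementation.

def percent_volume_difference(v_ref: float, v_meas: float) -> float:
    """Absolute volume difference relative to the reference volume, in percent."""
    return abs(v_meas - v_ref) / v_ref * 100.0

def volume_category(volume_mm3: float, cut_points=(100.0, 250.0, 1800.0)) -> int:
    """Map a nodule volume (mm^3) to an ordinal category via assumed cut-points."""
    category = 2  # most benign bucket in this illustration
    for cut in cut_points:
        if volume_mm3 >= cut:
            category += 1
    return category

# Paired nodule volumes (mm^3): reference thin-slice vs. another reconstruction.
pairs = [(120.0, 180.0), (300.0, 330.0), (95.0, 160.0)]

errors = [percent_volume_difference(ref, meas) for ref, meas in pairs]
agreement = sum(volume_category(ref) == volume_category(meas)
                for ref, meas in pairs) / len(pairs)

print(f"mean percent volume difference: {sum(errors) / len(errors):.1f}%")
print(f"category agreement: {agreement:.0%}")
```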