Objective. Honeycomb lung is a rare but severe condition with a distinctive honeycomb-like appearance on radiological imaging. This study aims to develop a deep-learning model capable of segmenting honeycomb lung lesions from Computed Tomography (CT) scans, addressing the difficulty of segmenting these lesions accurately.

Methods. This study proposes a sparse mapping-based graph representation segmentation network (SM-GRSNet). SM-GRSNet integrates an attention affinity mechanism to filter redundant features at a coarse-grained region level; the attention encoder produced by this mechanism focuses specifically on the lesion area. In addition, we introduce a graph representation module based on sparse links, performing graph representation operations on the sparse graph to yield detailed lesion segmentation. Finally, we construct a pyramid-structured cascaded decoder that combines features from the sparse link-based graph representation modules and the attention encoders to generate the final segmentation mask.

Results. Experimental results demonstrate that the proposed SM-GRSNet achieves state-of-the-art performance on a dataset comprising 7170 honeycomb lung CT images. Our model attains the highest IoU (87.62%) and Dice coefficient (93.41%), as well as the lowest HD95 (6.95) and ASD (2.47).

Significance. The proposed SM-GRSNet can be used for automatic segmentation of honeycomb lung CT images, improving segmentation performance on honeycomb lung lesions even with small sample datasets. It can assist physicians with early screening, accurate diagnosis, and customized treatment. The method maintains high correlation and consistency between automatic segmentation results and expert manual segmentations; accurate automatic segmentation of the honeycomb lung lesion area is clinically important.
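To make the sparse link-based graph representation idea concrete, the following is a minimal NumPy sketch of one such step: build pairwise affinities between node features (e.g. flattened feature-map pixels), keep only the strongest links per node (the sparse mapping), then propagate features over the resulting sparse graph. The function name, the `keep_ratio` parameter, and the dot-product affinity are illustrative assumptions, not the paper's actual formulation, which the abstract does not detail.

```python
import numpy as np

def sparse_graph_representation(features, keep_ratio=0.1):
    """Illustrative sparse-link graph propagation (not the paper's exact module).

    features: (N, d) array of node features, assumed non-negative here.
    Returns: (N, d) array of propagated features.
    """
    n, _ = features.shape
    # Dense pairwise affinity between all nodes (dot-product similarity).
    affinity = features @ features.T
    # Sparse mapping: keep only the top-k strongest links per node.
    k = max(1, int(n * keep_ratio))
    top_idx = np.argsort(-affinity, axis=1)[:, :k]
    sparse = np.zeros_like(affinity)
    rows = np.arange(n)[:, None]
    sparse[rows, top_idx] = affinity[rows, top_idx]
    # Row-normalise so each node aggregates a weighted mean of its neighbours.
    sparse /= sparse.sum(axis=1, keepdims=True) + 1e-8
    # One graph representation (propagation) step over the sparse graph.
    return sparse @ features
```

Restricting propagation to the strongest links is what keeps the graph operation tractable on large feature maps while still letting lesion pixels exchange information with their most similar neighbours.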