Abstract

Automatic segmentation of lung tumors is often difficult because tumor size varies widely, from less than 1 cm to greater than 7 cm depending on the T-stage. This study aims to accurately segment lung tumors of various sizes using a consistency learning-based multi-scale dual-attention network (CL-MSDA-Net). To avoid under- and over-segmentation caused by differing ratios of lung tumor to surrounding structures in the input patch, which depend on the size of the tumor, a size-invariant patch is generated by normalizing this ratio to the average size of the lung tumors in the training set. Two input patches, a size-invariant patch and a size-variant patch, are fed to a consistency learning-based network consisting of two weight-sharing branches, with a consistency loss encouraging the branches to produce similar outputs. Each branch contains a multi-scale dual-attention module that learns image features at different scales and uses channel and spatial attention to enhance the scale-attention ability needed to segment lung tumors of different sizes. In experiments on hospital datasets, CL-MSDA-Net achieved an F1-score of 80.49%, recall of 79.06%, and precision of 86.78%; its F1-score was 3.91%, 3.38%, and 2.95% higher than those of U-Net, U-Net with a multi-scale module, and U-Net with a multi-scale dual-attention module, respectively. On the NSCLC-Radiomics dataset, CL-MSDA-Net achieved an F1-score of 71.7%, recall of 68.24%, and precision of 79.33%; its F1-score was 3.66%, 3.38%, and 3.13% higher than those of U-Net, U-Net with a multi-scale module, and U-Net with a multi-scale dual-attention module, respectively. CL-MSDA-Net improves segmentation performance on average for tumors of all sizes, with especially significant improvements for small tumors.
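
The dual-branch consistency training described above can be sketched as follows: the size-variant and size-invariant patches pass through the same weight-shared segmentation network, and a consistency loss penalizes disagreement between their predictions in addition to the supervised segmentation loss. The code below is a minimal PyTorch illustration of this idea under stated assumptions, not the authors' implementation: it assumes both patches and the ground-truth mask have been resampled to a common grid, and the names DualAttention, training_step, and lambda_consistency are hypothetical.

# Minimal PyTorch sketch of weight-shared dual-branch consistency training
# with a CBAM-style channel + spatial (dual) attention block. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualAttention(nn.Module):
    """Channel attention followed by spatial attention (hypothetical sketch)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention: reweight feature channels using global average context.
        b, c, _, _ = x.shape
        ca = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention: reweight locations using pooled channel statistics.
        sa_in = torch.cat([x.mean(dim=1, keepdim=True),
                           x.amax(dim=1, keepdim=True)], dim=1)
        sa = torch.sigmoid(self.spatial_conv(sa_in))
        return x * sa

def training_step(net, size_variant, size_invariant, target, lambda_consistency=0.1):
    """One dual-branch step: both patches go through the same (weight-shared) net;
    supervised loss on each branch plus a consistency loss between the branches."""
    logits_var = net(size_variant)    # branch 1: original size-variant patch
    logits_inv = net(size_invariant)  # branch 2: size-invariant (ratio-normalized) patch
    seg_loss = (F.binary_cross_entropy_with_logits(logits_var, target)
                + F.binary_cross_entropy_with_logits(logits_inv, target))
    # Consistency loss: the two branches should agree on the tumor probability map.
    cons_loss = F.mse_loss(torch.sigmoid(logits_var), torch.sigmoid(logits_inv))
    return seg_loss + lambda_consistency * cons_loss

In this sketch, sharing weights between branches is what ties the two views together; the consistency term only supplies an extra gradient signal pushing the single network toward size-robust predictions.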
