Abstract

Honeycomb lung is a radiological manifestation of various lung diseases and poses a serious threat to patients' lives worldwide. In clinical practice, precise localization of lesions and assessment of their severity are crucial. However, accurate segmentation and grading are challenging for physicians because of the heavy annotation burden and the diversity of honeycomb lung presentations. In this paper, we propose a multitask learning architecture for semi-supervised segmentation and grading diagnosis that achieves automatic localization and assessment of lesions. To the best of our knowledge, this is the first method to integrate a grading diagnosis task into semi-supervised honeycomb lung segmentation. First, we adopt cross-learning to capture local features and long-range dependencies from the CNN and transformer branches. Second, considering the diversity of honeycomb lung lesions, we design a shape-edge aware constraint to help the model locate lesions. Third, to better exploit the different levels of information in the images, we develop global and local contrastive learning to enhance the model's learning of semantic-level and pixel-level features. Finally, to improve diagnostic accuracy, we propose a gradient thresholding algorithm that integrates the segmentation predictions into the grading diagnosis network. Experimental results on an in-house honeycomb lung dataset demonstrate the superiority of our method, which achieves state-of-the-art performance compared with existing approaches. In particular, on external test data, our predictions agree with physicians' assessments in the majority of cases. Segmentation results on the public Kvasir-SEG dataset further indicate that our method generalizes well.
