Abstract
Accurate segmentation of multiple targets, such as the ribs, clavicles, heart, and lung fields, in chest X-ray images is crucial for diagnosing various lung diseases. Mainstream deep learning methods currently rely heavily on large-scale, fully annotated datasets, yet annotating every object in a chest X-ray is labor-intensive and time-consuming. Publicly available partially annotated chest X-ray datasets differ in their annotation targets and standards, and existing studies seldom exploit them comprehensively. To address these challenges, we propose a Multi-objective Segmentation Method for chest X-rays based on Collaborative Learning from Multiple Partially Annotated Datasets (MSM-CLMPAD). Our approach first uses an encoder built from densely connected blocks to extract multi-scale features from multiple partially annotated datasets. Then, exploiting the overlapping relationships among segmentation targets, we design an attention-guided dual decoder that effectively disentangles the features corresponding to the different targets. Importantly, we propose an alternating training strategy across datasets, enabling a single network to learn collaboratively from datasets with diverse annotation targets. Experiments on four public datasets show that our method achieves higher Dice and Jaccard coefficients than other popular methods, particularly for overlapping targets and unclear regions. We also explore the mutual influence of different targets in chest X-rays, offering a solution for interaction among partial datasets and further alleviating the annotation burden for multi-organ segmentation.
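The abstract's core training idea, learning one model from several datasets that each annotate only some targets, can be sketched as a loss masked to the annotated channels plus a round-robin schedule over datasets. The names, the channel layout, and the simple Dice formulation below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): each partially
# annotated dataset labels only a subset of the four targets, so the loss
# is computed only over that dataset's annotated channels, and training
# steps alternate across datasets so one network sees all annotations.

TARGETS = ["ribs", "clavicles", "heart", "lung_fields"]  # assumed channel order

def masked_dice_loss(pred, gt, annotated):
    """Mean (1 - Dice) over annotated target channels only.

    pred, gt: (C, H, W) arrays with values in [0, 1];
    annotated: indices of channels this dataset actually labels.
    """
    losses = []
    for c in annotated:
        inter = (pred[c] * gt[c]).sum()
        denom = pred[c].sum() + gt[c].sum()
        losses.append(1.0 - (2.0 * inter + 1e-6) / (denom + 1e-6))
    return float(np.mean(losses))

def alternating_schedule(datasets, steps):
    """Round-robin over datasets: one mini-batch per dataset in turn."""
    return [datasets[step % len(datasets)] for step in range(steps)]
```

A perfect prediction on the annotated channels yields a loss near zero regardless of what the model outputs on the unannotated channels, which is what lets datasets with different annotation targets train the same network without penalizing missing labels.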
Journal: Information Fusion