Abstract

Rib segmentation in 2D chest X-ray images is a crucial and challenging task. On the one hand, chest X-ray images are the most prevalent form of medical imaging owing to their convenience, affordability, and minimal radiation exposure. On the other hand, these images present intricate challenges, including overlapping anatomical structures, substantial noise and artifacts, and inherent anatomical complexity. Most current methods employ deep convolutional networks for rib segmentation and therefore require a large quantity of accurately labeled data for effective training; however, precise pixel-level labeling of chest X-ray images is notably difficult. In addition, many methods neglect the problem of fractured (discontinuous) predictions and the heavy post-processing they entail. In contrast, CT images can be labeled directly because the 3D structure and patterns of organs and tissues are available. In this paper, we redesign the rib segmentation task for chest X-ray images and propose a concise and efficient cross-modal method based on unsupervised domain adaptation with a centerline loss function that prevents discontinuous results and avoids rigorous post-processing. We use digitally reconstructed radiography (DRR) images and labels generated from 3D CT volumes to guide rib segmentation on unlabeled 2D chest X-ray images. Remarkably, our model achieves a higher Dice score on the test samples and produces highly interpretable results, without requiring any rib annotations on chest X-ray images. Our code and demo will be released at https://github.com/jialin-zhao/RibsegBasedonUDA.
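The abstract does not spell out how the centerline loss is computed; the sketch below is one plausible reading, assuming a soft Dice term on the labeled DRR (source) domain plus a penalty on low predicted probability along precomputed rib centerlines (e.g. skeletonized from the projected CT labels), which would discourage fractured rib masks. All function names, tensor shapes, and the weighting `lambda_cl` are illustrative assumptions, not the authors' exact formulation.

```python
# Hypothetical sketch of a centerline-aware segmentation objective, assuming
# PyTorch and (N, 1, H, W) probability maps; not the paper's actual code.
import torch


def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss between predicted probabilities and binary masks."""
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = pred.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1.0 - (2.0 * inter + eps) / (union + eps)).mean()


def centerline_loss(pred: torch.Tensor, centerline: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Penalize low predicted probability on rib centerline pixels.

    `centerline` is a binary map of rib centerlines (assumed to be
    skeletonized from the CT-derived labels); pushing the prediction toward 1
    along it discourages breaks in the predicted ribs.
    """
    on_line = (pred * centerline).sum(dim=(1, 2, 3))
    total = centerline.sum(dim=(1, 2, 3)) + eps
    return (1.0 - on_line / total).mean()


def total_loss(pred_drr: torch.Tensor,
               label_drr: torch.Tensor,
               centerline_drr: torch.Tensor,
               lambda_cl: float = 0.5) -> torch.Tensor:
    """Combined objective on the labeled DRR (source) domain."""
    return dice_loss(pred_drr, label_drr) + lambda_cl * centerline_loss(pred_drr, centerline_drr)
```

In the unsupervised domain adaptation setting described in the abstract, such a supervised term on DRR images would typically be combined with an alignment term (e.g. adversarial or feature-level) on the unlabeled chest X-ray domain; the abstract does not specify which variant is used.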

