Abstract

Accurate segmentation of tubular structures, such as blood vessels and bile ducts, is pivotal for clinical diagnosis and subsequent treatment. However, challenges arise from their unique structure and morphology. These structures are thin and elongated, and often exhibit complex branching patterns, making precise delineation difficult. Additionally, the intricate and often noisy backgrounds in medical images complicate the extraction of detail and shape features. These challenges collectively hinder the effectiveness of automated segmentation methods. To address these issues, this paper introduces the shape-supervised feature fusion U-Net (SFU-Net), a novel approach that extends the traditional U-Net architecture. Our network incorporates a feature fusion module that effectively combines different semantic information in both the encoder and decoder paths, facilitating the capture of detailed geometric representations of tubular structures. Furthermore, a novel distance-based tubularity loss is designed to capture boundary information and supervise segmentation integrity by imposing shape constraints. We conducted comprehensive experiments on three tubular structure segmentation tasks in medical imaging: the dilated biliary tree, hepatic vessel, and pulmonary artery. The experimental results demonstrate that the proposed SFU-Net achieves the best performance in terms of accuracy and completeness, with Dice scores of 76.14%, 80.42%, and 83.79%, and HD95 values of 8.59 mm, 6.43 mm, and 4.31 mm across the three datasets, respectively. Compared to traditional U-Net-based segmentation methods and other tubular segmentation approaches, our method exhibits superior performance, underscoring its effectiveness in medical image segmentation of tubular structures.
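
The abstract does not give the exact formulation of the distance-based tubularity loss, but the general idea of weighting segmentation errors by their distance to the ground-truth boundary can be sketched as follows. This is a minimal, hypothetical illustration in PyTorch, assuming a distance-transform-weighted penalty; the helper names (`boundary_distance_map`, `distance_weighted_loss`) are ours and are not from the paper.

```python
# Illustrative sketch of a distance-transform-weighted boundary penalty.
# This is NOT the paper's exact tubularity loss, only a common pattern
# for distance-based shape supervision of thin tubular structures.
import numpy as np
import torch
from scipy.ndimage import distance_transform_edt


def boundary_distance_map(mask: np.ndarray) -> np.ndarray:
    """Per-voxel distance to the ground-truth object boundary,
    computed separately outside and inside the object."""
    fg = mask.astype(bool)
    if not fg.any():
        return np.zeros_like(mask, dtype=np.float32)
    dist_out = distance_transform_edt(~fg)  # distance of background voxels to the object
    dist_in = distance_transform_edt(fg)    # distance of foreground voxels to the boundary
    return (dist_out + dist_in).astype(np.float32)


def distance_weighted_loss(pred_prob: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Penalize prediction errors proportionally to their distance from
    the ground-truth boundary, so leaks and breaks along thin branches
    far from the true surface are punished more heavily."""
    gt_np = gt.detach().cpu().numpy()
    dist = torch.from_numpy(
        np.stack([boundary_distance_map(m) for m in gt_np])
    ).to(pred_prob.device)
    # Absolute mismatch between predicted probability and ground truth,
    # weighted by the precomputed boundary distance map.
    return ((pred_prob - gt.float()).abs() * dist).mean()
```

In practice such a term would be combined with a region-overlap loss (e.g., Dice or cross-entropy), with the distance-based term supplying the boundary- and shape-aware supervision described in the abstract.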
