Abstract

Breast mass segmentation in mammograms remains a crucial yet challenging task in computer-aided diagnosis systems. Existing algorithms mainly rely on mass-centered patches to achieve mass segmentation, which is time-consuming and unstable in clinical diagnosis. We therefore aim to perform fully automated mass segmentation directly on whole mammograms with deep learning. In this work, we propose a novel dual contextual affinity network (DCANet) for mass segmentation in whole mammograms. On top of an encoder-decoder structure, two lightweight yet effective contextual affinity modules are introduced: a global-guided affinity module (GAM) and a local-guided affinity module (LAM). The former aggregates features over all positions and captures long-range contextual dependencies, enhancing the feature representations of homogeneous regions. The latter emphasizes semantic information around each position and exploits contextual affinity within a local field-of-view, improving discrimination among heterogeneous regions. The proposed DCANet is evaluated on two public mammographic databases, DDSM and INbreast, achieving Dice similarity coefficients (DSC) of 85.95% and 84.65%, respectively, and outperforming current state-of-the-art methods in both segmentation performance and computational efficiency. Extensive qualitative and quantitative analyses suggest that the proposed fully automated approach is robust enough to provide fast and accurate support for clinical breast mass segmentation.
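The abstract does not give the exact formulation of the two modules, but the descriptions (affinity over all positions for GAM, affinity within a local window for LAM) match the familiar non-local, self-attention-style pattern. The sketch below is therefore only an illustrative NumPy interpretation, not the authors' implementation; the function names, the k×k window size, and the (channels × positions) feature layout are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax used to normalise affinities.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_affinity(features):
    """GAM-style aggregation (assumed form): every position's output is an
    affinity-weighted sum over ALL positions, capturing long-range context.

    features: (C, N) array -- C channels, N = H*W flattened positions.
    """
    # Pairwise affinity between all N positions, normalised per query.
    affinity = softmax(features.T @ features, axis=-1)   # (N, N)
    # Weighted sum of every position's feature for each query position.
    return features @ affinity.T                          # (C, N)

def local_affinity(features, h, w, k=3):
    """LAM-style aggregation (assumed form): the same affinity weighting,
    restricted to a k-by-k local field-of-view around each position."""
    C, N = features.shape
    fmap = features.reshape(C, h, w)
    out = np.zeros_like(fmap)
    r = k // 2
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - r), min(h, i + r + 1)
            j0, j1 = max(0, j - r), min(w, j + r + 1)
            window = fmap[:, i0:i1, j0:j1].reshape(C, -1)  # (C, M) local patch
            aff = softmax(fmap[:, i, j] @ window)          # (M,) local affinity
            out[:, i, j] = window @ aff                    # weighted local sum
    return out.reshape(C, N)
```

Both functions keep the feature shape unchanged, so either module could plug into an encoder-decoder between convolutional stages; the local variant trades the O(N²) affinity matrix for an O(N·k²) computation, consistent with the paper's "lightweight" claim.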
