Abstract

Automated semantic segmentation of breast ultrasound images remains challenging due to poor contrast, indistinct target boundaries, and pervasive shadowing artifacts. Recently, U-shaped convolutional neural networks (CNNs) have demonstrated considerable performance in medical image segmentation. However, classic U-shaped networks suffer from potential semantic gaps caused by the incompatibility between encoder and decoder features, which results in sub-optimal semantic segmentation performance on ultrasound images. In this work, we focus on improving the U-shaped CNN by adaptively reducing these semantic gaps and enhancing contextual relationships between encoder and decoder features. Specifically, we propose two lightweight yet effective context refinement blocks: an inverted residual pyramid block (IRPB) and a context-aware fusion block (CFB). The former selectively extracts multi-scale semantic representations conditioned on the input features, adaptively reducing the semantic gaps between encoder and decoder features. The latter exploits inter-feature semantic interactions to enhance contextual correlations between the encoder and the decoder, improving the fusion of low- and high-level features. Further, we develop a novel multi-level context refinement network (MCRNet) by seamlessly plugging these two context refinement blocks into an encoder-decoder architecture in a multi-level manner, thereby achieving fully automated semantic segmentation of ultrasound images. To objectively validate the proposed method, we carry out extensive qualitative and quantitative analyses on two publicly available breast ultrasound databases, BUSI and UDIAT. The experimental results demonstrate the efficacy of the proposed method. Moreover, compared with nine state-of-the-art semantic segmentation methods, MCRNet achieves superior performance while preserving fine computational efficiency.
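
The abstract only names the two refinement blocks; purely as an illustration, the following is a minimal PyTorch sketch of how a multi-scale inverted-residual pyramid and a context-aware encoder-decoder fusion step could be realized inside a U-shaped network. The class names, channel sizes, dilation rates, and the attention-style gating are assumptions made for this sketch, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): a multi-scale inverted-residual
# pyramid block and a context-aware fusion of encoder (skip) and decoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InvertedResidualPyramidBlock(nn.Module):
    """Assumed design: expand channels, run parallel dilated depthwise
    convolutions (the "pyramid"), then project back with a residual path."""

    def __init__(self, channels, expansion=2, dilations=(1, 2, 4)):
        super().__init__()
        hidden = channels * expansion
        self.expand = nn.Sequential(
            nn.Conv2d(channels, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True))
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(hidden, hidden, 3, padding=d, dilation=d,
                          groups=hidden, bias=False),
                nn.BatchNorm2d(hidden), nn.ReLU6(inplace=True))
            for d in dilations])
        self.project = nn.Sequential(
            nn.Conv2d(hidden * len(dilations), channels, 1, bias=False),
            nn.BatchNorm2d(channels))

    def forward(self, x):
        y = self.expand(x)
        y = torch.cat([b(y) for b in self.branches], dim=1)
        return x + self.project(y)  # linear projection plus identity skip


class ContextAwareFusionBlock(nn.Module):
    """Assumed design: gate the encoder skip features with channel attention
    derived from the upsampled decoder features, then fuse by 1x1 convolution."""

    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(channels * 2, channels, 1)

    def forward(self, encoder_feat, decoder_feat):
        decoder_feat = F.interpolate(decoder_feat, size=encoder_feat.shape[2:],
                                     mode="bilinear", align_corners=False)
        gated_skip = encoder_feat * self.gate(decoder_feat)
        return self.fuse(torch.cat([gated_skip, decoder_feat], dim=1))
```

In a multi-level arrangement of this kind, one such refinement block would be applied at each resolution level of the encoder-decoder, replacing the plain concatenation used by a classic U-shaped skip connection.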
