Abstract

Segmenting breast tumors in ultrasonography is challenging due to low image quality and the presence of artifacts. We integrate radiologists' reading and diagnostic expertise with artificial intelligence to establish a clinically informed deep learning network that robustly extracts and delineates features of breast fibroadenoma. The spatial local feature contrast (SLFC) module captures overall tumor contours, while the channel recursive gated attention (CRGA) module enhances edge perception through high-dimensional information interaction. Additionally, full-scale feature fusion and enhanced deep supervision are applied to improve model stability and performance. To achieve smoother boundaries, we introduce a new loss function (cosh-smooth) that penalizes boundary errors and finely tunes tumor edges. Our dataset comprises 1016 clinical ultrasound images of breast fibroadenoma with labeled masks, alongside a publicly available dataset of 246 images. Segmentation performance is evaluated using the Dice similarity coefficient (DSC) and mean intersection over union (MIOU). Extensive experiments demonstrate that the proposed MS-CFNet outperforms state-of-the-art methods; compared with the TransUNet baseline, MS-CFNet improves DSC by 1.47% and MIOU by 2.56%. The promising results of MS-CFNet are attributed to the integration of the radiologists' clinical diagnostic procedure and a bionic design mindset, which enhance the network's ability to recognize and segment breast fibroadenomas effectively.
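
For reference, the two reported metrics are DSC = 2|P∩G| / (|P| + |G|) and IoU = |P∩G| / |P∪G| averaged over classes. The sketch below computes both for binary masks and includes a log-cosh Dice term as one plausible form of a smooth boundary-tuning loss; the exact formulation of cosh-smooth is not stated in this abstract, so the loss shown, together with the helper names dice_coefficient, mean_iou, and log_cosh_dice_loss, is an illustrative assumption rather than the paper's definition.

```python
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient (DSC) for binary masks: 2|P∩G| / (|P| + |G|)."""
    pred = (pred > 0.5).float()
    target = target.float()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_iou(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Mean intersection over union (MIOU), averaged over foreground and background."""
    pred = (pred > 0.5).float()
    target = target.float()
    ious = []
    for cls_pred, cls_target in ((pred, target), (1.0 - pred, 1.0 - target)):
        inter = (cls_pred * cls_target).sum()
        union = cls_pred.sum() + cls_target.sum() - inter
        ious.append((inter + eps) / (union + eps))
    return torch.stack(ious).mean()

def log_cosh_dice_loss(prob: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Illustrative smooth Dice-based loss, log(cosh(1 - soft Dice)).
    This is only a plausible stand-in for the cosh-smooth loss, not its published form."""
    intersection = (prob * target).sum()
    soft_dice = (2.0 * intersection + eps) / (prob.sum() + target.sum() + eps)
    return torch.log(torch.cosh(1.0 - soft_dice))

# Example: network output after sigmoid vs. a ground-truth mask
prob = torch.rand(1, 1, 256, 256)
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()
print(dice_coefficient(prob, mask), mean_iou(prob, mask), log_cosh_dice_loss(prob, mask))
```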
