Abstract

Acquiring adequate annotated data to train a deep convolutional neural network (CNN) is often challenging, especially for medical data, where manual annotation is tedious and time-consuming. To address this issue, we introduce a novel end-to-end trainable framework for interactive segmentation of breast lesions in ultrasound images that requires only four clicks. Interference from varying lesion sizes and the class-imbalance problem pose major challenges to precise segmentation, so we propose a Region of Interest (RoI) focusing module that resizes RoI features to a fixed dimension and forces the network to focus only on the lesions by discarding the background. In addition, to utilize both RoI and global features fully and judiciously, we introduce an RoI & Global re-calibration module, which re-weights each channel of the entire feature map and of the RoI feature map before concatenation, so that regional and global information are combined in a well-balanced way for more accurate segmentation. On an unseen test set of 120 cases (240 images) of breast lesion ultrasound images, the proposed framework achieves an Intersection over Union (IoU) of 89.33 ± 5.16%, a Dice similarity coefficient (Dice) of 94.28 ± 3.11%, and a pixel accuracy (PA) of 99.25 ± 0.83%, demonstrating the effectiveness of our method.
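The channel re-weighting performed by the RoI & Global re-calibration module can be illustrated with a minimal squeeze-and-excitation-style sketch. This is a hypothetical NumPy illustration, not the paper's implementation: the function name `channel_recalibrate`, the bottleneck ratio, and the random weights are all assumptions; the paper only states that each channel of the global and RoI feature maps is re-weighted before concatenation.

```python
import numpy as np

def channel_recalibrate(feat, w1, w2):
    # Hypothetical sketch of per-channel re-calibration:
    # squeeze each channel to a scalar by global average pooling,
    # pass it through a small bottleneck (ReLU then sigmoid),
    # and scale each channel of the feature map by its gate.
    squeezed = feat.mean(axis=(1, 2))              # (C,) channel descriptors
    hidden = np.maximum(0.0, w1 @ squeezed)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gates in (0, 1)
    return feat * gates[:, None, None]             # re-weighted feature map

rng = np.random.default_rng(0)
C = 8
feat_global = rng.standard_normal((C, 16, 16))     # full-image feature map
feat_roi = rng.standard_normal((C, 7, 7))          # RoI features at a fixed size
w1 = rng.standard_normal((C // 2, C))              # assumed bottleneck weights
w2 = rng.standard_normal((C, C // 2))

out_global = channel_recalibrate(feat_global, w1, w2)
out_roi = channel_recalibrate(feat_roi, w1, w2)
```

After re-calibration, the two re-weighted maps would be concatenated along the channel axis (upsampling the RoI branch as needed) so the decoder sees both regional and global context.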
