Abstract

Automatic segmentation and localization of lesions in mammogram (MG) images are challenging problems, even when employing advanced methods such as deep learning (DL) [1]–[3]. To address these challenges, we propose a U-Net approach to automatically detect and segment lesions in MG images. U-Net [4] is an end-to-end convolutional neural network (CNN) based model that has achieved remarkable results in segmenting biomedical images [5]. We modified the architecture of the U-Net model to maximize its precision, for example by adding batch normalization and dropout and by applying data augmentation. Owing to its architecture, the proposed U-Net model efficiently predicts a pixel-wise segmentation map of an input full MG image. These pixel-wise segmentation maps help radiologists differentiate benign from malignant lesions based on lesion shape. The main challenge that most DL methods face in mammography is the need for large annotated training datasets: training such networks without over-fitting requires thousands or millions of MG images [1], [3], [5]. In contrast, U-Net is capable of learning from a relatively small training dataset compared to other DL methods [4]. We used publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and MG images from the University of Connecticut Health Center (UCHC) to train the proposed U-Net model [3]. The proposed U-Net method is trained on MG images that contain mass lesions of different sizes, shapes, margins, and intensity variations around the mass boundaries. All training MG images containing suspicious areas are accompanied by pixel-level ground truth maps (GTMs) that label each pixel as background or breast lesion. A total of 2066 MG images and their corresponding segmentation GTMs are used to train the proposed U-Net model. Moreover, we applied the adaptive median filter (AMF) and the contrast limited adaptive histogram equalization (CLAHE) filter to the training MG images to enhance their characteristics and improve the performance of the downstream analysis [3]. We compared the performance of our model with that of the state-of-the-art Faster R-CNN model [6] and the region growing (RG) model [7]. We tested the proposed U-Net method on film-based and fully digitized MG images. The proposed U-Net model shows slightly better performance in detecting true segments than the Faster R-CNN model and outperforms it significantly in terms of runtime. In addition, the proposed U-Net model produces precise segmentations of the lesions in the MG images, whereas the Faster R-CNN method only gives bounding boxes surrounding the lesions. Moreover, the proposed U-Net method outperforms the RG model. Data augmentation proved very effective in our experiments, increasing the Dice similarity coefficient between the GTMs and the segmented lesion maps from 0.918 to 0.983. The proposed model also yielded an Intersection over Union (IoU) of 0.974, compared to 0.966 for the state-of-the-art Faster R-CNN model. In conclusion, the performance of the proposed DL model shows promise for practical clinical application to assist radiologists.
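As a rough illustration of the evaluation metrics and preprocessing mentioned above (not the authors' implementation), the following minimal sketch computes the Dice similarity coefficient and IoU between a pixel-level GTM and a predicted binary segmentation map, and applies CLAHE contrast enhancement to a grayscale MG image using OpenCV. The function names, smoothing constant, and CLAHE parameters are illustrative assumptions; the adaptive median filter step is omitted.

```python
# Minimal sketch, assuming binary masks and 8-bit grayscale MG images.
# Not the paper's code: helper names and parameter values are illustrative.
import numpy as np
import cv2


def dice_coefficient(gtm: np.ndarray, pred: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    gtm, pred = gtm.astype(bool), pred.astype(bool)
    intersection = np.logical_and(gtm, pred).sum()
    return float((2.0 * intersection + eps) / (gtm.sum() + pred.sum() + eps))


def iou(gtm: np.ndarray, pred: np.ndarray, eps: float = 1e-7) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for two binary masks."""
    gtm, pred = gtm.astype(bool), pred.astype(bool)
    intersection = np.logical_and(gtm, pred).sum()
    union = np.logical_or(gtm, pred).sum()
    return float((intersection + eps) / (union + eps))


def enhance_contrast(mg_image: np.ndarray) -> np.ndarray:
    """CLAHE enhancement of an 8-bit grayscale MG image.

    The clip limit and tile grid size are illustrative defaults,
    not values reported in the paper.
    """
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(mg_image)


if __name__ == "__main__":
    # Toy masks standing in for a GTM and a predicted segmentation map.
    gtm = np.zeros((64, 64), dtype=np.uint8)
    gtm[20:40, 20:40] = 1
    pred = np.zeros((64, 64), dtype=np.uint8)
    pred[22:42, 22:42] = 1

    print(f"Dice: {dice_coefficient(gtm, pred):.3f}")
    print(f"IoU:  {iou(gtm, pred):.3f}")
```

In practice, the same Dice and IoU functions can be applied per image to compare the model's thresholded output against the GTM, then averaged over the test set to obtain summary scores such as those reported above.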
