Abstract

Various breast cancer detection systems have been developed to help clinicians analyze screening mammograms. Because the incidence of breast cancer continues to rise, researchers are working to develop new methods to reduce the risks of this life-threatening disease. Owing to recent advances in deep learning, Convolutional Neural Networks (CNNs) have shown great promise in medical imaging. However, CNN-based methods have been limited by the small size of the few publicly available breast cancer datasets. This research introduces a new framework for breast cancer detection that combines CNNs with image processing, motivated by the notable success of CNNs in image recognition, where they have reached human-level performance. The framework employs RetinaNet, an efficient and fast pre-trained object detector with an uncomplicated one-stage design. A two-stage transfer learning scheme is used with the selected detector to improve its performance: the RetinaNet model is initially trained on the general-purpose COCO dataset; transfer learning is then applied to adapt the model to the CBIS-DDSM mammogram dataset; finally, a second transfer learning step adapts and evaluates the model on the small INbreast mammogram dataset. The proposed two-stage transfer learning (RetinaNet → CBIS-DDSM → INbreast) outperforms other state-of-the-art methods on the public INbreast dataset. Furthermore, it achieves a True Positive Rate (TPR) of 0.99 ± 0.02 at 1.67 False Positives per Image (FPPI), compared with a TPR of 0.94 ± 0.02 at 1.67 FPPI for one-stage transfer learning.
