Abstract

Full-field digital mammography (FFDM) and magnetic resonance imaging (MRI) are gold-standard techniques for breast cancer detection. The recently introduced contrast-enhanced digital mammography (CEDM) integrates the complementary strengths of FFDM and MRI and is being adopted by leading institutions. Current clinical practice with CEDM is sub-optimal because it relies primarily on clinicians' trained eyes. Automated diagnostic systems built under the conventional machine learning paradigm suffer from drawbacks such as the requirement for precise segmentation, the extraction of shallow features that do not suffice for diagnostic images, and the adoption of a sequential design without a global objective. We propose a deep learning (DL)-empowered diagnostic system using CEDM, the core of which is a novel dual-mode deep transfer learning (D2TL) model. The proposed system is innovative in several aspects, including (1) a dual-mode deep architecture design; (2) use of transfer learning to facilitate robust model estimation under small sample sizes; (3) development of visualization techniques that help interpret the model results and facilitate inter- and intra-tumor malignancy quantification; and (4) minimization of human bias. We apply D2TL to classify benign vs. malignant tumors using CEDM data collected at the Mayo Clinic in Arizona. D2TL outperforms competing models and approaches.
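The abstract's key technical idea, using transfer learning to obtain a robust model from a small sample, can be illustrated with a minimal sketch: a feature extractor pretrained on a large source dataset is kept frozen, and only a small classification head is estimated from the limited target data. The sketch below uses a frozen random projection as a stand-in for pretrained features and a logistic-regression head; it is illustrative only and does not reproduce the paper's D2TL architecture, which is not described in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_features(x, w_frozen):
    # "Pretrained" extractor: its parameters are never updated on the
    # target data, mimicking a frozen transferred backbone.
    return np.maximum(x @ w_frozen, 0.0)  # ReLU features

def train_head(feats, y, lr=0.2, steps=1000):
    # Only the head is fit on the small target sample, via plain
    # logistic-regression gradient descent (convex, so stable).
    w = np.zeros(feats.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))
        w -= lr * feats.T @ (p - y) / len(y)
    return w

# Tiny synthetic stand-in for a small "benign vs. malignant" dataset.
X = rng.normal(size=(40, 16))
y = (X[:, 0] > 0).astype(float)              # 0 = benign, 1 = malignant
w_frozen = rng.normal(size=(16, 32)) / 4.0   # stands in for pretrained weights

feats = frozen_features(X, w_frozen)
w_head = train_head(feats, y)
acc = np.mean((feats @ w_head > 0) == y)     # training accuracy of the head
```

Because only the head's 32 parameters are estimated from the 40 target samples, the effective model capacity is kept small, which is the mechanism by which transfer learning mitigates overfitting under small sample sizes.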
