Abstract

Considerable variation in the size, shape, and location of tumours makes it challenging for radiologists to diagnose breast cancer. Automated diagnosis from Contrast-Enhanced Spectral Mammography (CESM) can support clinical decision making. However, existing methods fail to obtain an effective representation of CESM images and ignore the relationships between them. In this paper, we investigate, for the first time, a novel and flexible multimodal representation learning method, the multi-feature deep information bottleneck (MDIB), for breast cancer classification in CESM. Specifically, the method incorporates an information bottleneck (IB)-based module to learn a prominent representation that provides a concise yet informative input for classification. In addition, we extend IB theory to a multi-feature IB, which facilitates the learning of classification-relevant features across CESM images. To validate our method, experiments were conducted on both a private dataset and a public dataset, and the classification results were compared with those of state-of-the-art methods. The experimental results demonstrate the effectiveness and efficiency of the proposed method. We release our code at https://github.com/sjq5263/MDIB-for-CESM-classification.
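The IB-based module described above trades off task performance against representation compression. As a generic illustration (not the paper's exact MDIB formulation), a common variational IB objective combines a per-sample task loss with a KL penalty that compresses a Gaussian latent code toward a standard-normal prior; the function names and the `beta` weight below are illustrative assumptions:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    # summed over latent dimensions for each sample.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def ib_loss(ce_per_sample, mu, log_var, beta=1e-3):
    # Generic variational IB objective (illustrative, not the MDIB loss):
    # task term (e.g. cross-entropy) + beta * compression term.
    return np.mean(ce_per_sample + beta * kl_to_standard_normal(mu, log_var))

# Toy batch of 4 samples with an 8-dimensional latent code.
mu = np.zeros((4, 8))       # encoder means
log_var = np.zeros((4, 8))  # encoder log-variances
ce = np.full(4, 0.7)        # hypothetical per-sample cross-entropy values
loss = ib_loss(ce, mu, log_var)
```

With `mu = 0` and `log_var = 0` the KL term vanishes, so the loss reduces to the mean task loss; increasing `beta` compresses the representation more aggressively at the cost of task accuracy.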
