Abstract

One of the most common mammographic manifestations of breast cancer is a solid mass. When the information obtained from mammography is inadequate, complementary modalities such as ultrasound imaging are used to obtain additional information. Although interest in combining information from different modalities is growing, doing so remains an extremely challenging task. In this regard, a computer-aided diagnosis (CAD) system can be an efficient way to overcome these difficulties. However, most studies have focused on developing mono-modal CAD systems, and the few existing bimodal ones rely on hand-crafted features extracted from mammograms and sonograms. To meet these challenges, this paper proposes a novel bimodal deep residual learning model, which consists of the following major steps. First, an informative representation is constructed separately for each input image. Second, to build a high-level joint representation of every pair of input images and effectively exploit the complementary information between them, their representation layers are fused. Third, all of these joint representations are fused to obtain the final common representation of the input images for the mass. Finally, the recognition result is produced from the information extracted from all input images. An augmentation strategy was applied to enlarge the dataset collected for this study. Our model achieves its best recognition results of 0.898, 0.938, 0.916, 0.964, and 0.917 in sensitivity, specificity, F1-score, area under the ROC curve, and accuracy, respectively. Extensive experiments demonstrate the effectiveness and superiority of the proposed model over other state-of-the-art models.
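The exact architecture is specified in the paper's body rather than the abstract. The following is a minimal PyTorch sketch of the fusion idea described above, simplified to a single mammogram–sonogram pair; the class name BimodalResidualFusionNet, the ResNet-18 backbones, and the concatenation-based fusion head are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18


class BimodalResidualFusionNet(nn.Module):
    """Hypothetical two-branch residual fusion classifier.

    Each modality (mammogram, sonogram) is encoded by its own
    residual backbone; the two representation vectors are then
    fused into a joint representation and classified.
    """

    def __init__(self, num_classes: int = 2, feat_dim: int = 512):
        super().__init__()
        # Step 1: one residual encoder per modality (weights not shared).
        self.mammo_encoder = resnet18(weights=None)
        self.sono_encoder = resnet18(weights=None)
        # Drop the ImageNet heads; keep the pooled 512-d features.
        self.mammo_encoder.fc = nn.Identity()
        self.sono_encoder.fc = nn.Identity()
        # Step 2: fuse the two representation layers into a joint vector
        # (here by concatenation followed by a learned projection).
        self.fusion = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
        )
        # Step 3: classify the mass (e.g., benign vs. malignant).
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, mammo: torch.Tensor, sono: torch.Tensor) -> torch.Tensor:
        f_m = self.mammo_encoder(mammo)  # (B, 512) mammogram representation
        f_s = self.sono_encoder(sono)    # (B, 512) sonogram representation
        joint = self.fusion(torch.cat([f_m, f_s], dim=1))
        return self.classifier(joint)


if __name__ == "__main__":
    model = BimodalResidualFusionNet()
    mammo = torch.randn(4, 3, 224, 224)  # batch of mammogram crops
    sono = torch.randn(4, 3, 224, 224)   # matching sonogram crops
    logits = model(mammo, sono)
    print(logits.shape)  # torch.Size([4, 2])
```

Keeping separate, unshared encoders lets each branch learn modality-specific features before fusion; the paper's model additionally fuses every pair of input views before forming the final common representation, a step omitted here for brevity.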
