Abstract

The fusion of mammography and ultrasound images can improve the accuracy of breast tumor classification. However, traditional fusion models ignore the correlation between the two modalities, which limits the performance gain. To address this problem, a modality-correlation embedding model is proposed for breast tumor diagnosis that combines mammography and ultrasound imaging. By jointly optimizing the correlation between mammography and ultrasound and the classification losses of the individual modalities, two mappings are learned that project mammography and ultrasound features from their original feature spaces into a common label space. A novel modality-correlation term is introduced to maintain the pairwise closeness of multimodal data in the common label space. Unlike previous studies, which did not consider the correlation between multimodal data, the proposed term exploits the learned correlation information during fusion, ensuring that the diagnostic results for multimodal images from the same patient remain consistent. The proposed method was evaluated on our dataset of ultrasound and mammography images from 73 patients. The area under the ROC curve, accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 95.83%, 95.00%, 91.67%, 95.83%, 95.83%, and 88.89%, respectively. The experimental results also demonstrate that the proposed method outperforms traditional fusion methods.
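To make the joint objective concrete, the following is a minimal sketch of one plausible formulation; the symbols $X_m$, $X_u$ (mammography and ultrasound feature matrices with paired rows), $Y$ (the shared label matrix), $W_m$, $W_u$ (the two learned mappings), and the weights $\lambda$, $\beta$ are illustrative assumptions, not the paper's exact notation:

$$
\min_{W_m,\, W_u} \;
\underbrace{\|X_m W_m - Y\|_F^2 + \|X_u W_u - Y\|_F^2}_{\text{per-modality classification losses}}
\;+\;
\underbrace{\lambda \,\|X_m W_m - X_u W_u\|_F^2}_{\text{modality-correlation term}}
\;+\;
\beta \left(\|W_m\|_F^2 + \|W_u\|_F^2\right)
$$

In this sketch, the middle term penalizes disagreement between the projections of paired mammography and ultrasound samples in the common label space, which is one way to enforce the pairwise closeness described above; the final term is standard Frobenius-norm regularization.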
