Abstract

Automatic breast image classification plays an important role in breast cancer diagnosis, and multi-modality image fusion may improve classification performance. However, existing fusion methods focus on improving the discriminative ability of single-modality features while ignoring the correlation information between modalities. To improve classification performance, this paper proposes a multi-modality relation attention network with consistent regularization for breast tumor classification using diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) images. Within the proposed network, a novel multi-modality relation attention module improves the discriminative ability of single-modality features by exploring the correlation information between the two modalities. In addition, a consistency regularization module enforces agreement between the ADC and DWI classification outputs, thus improving robustness to noise. Experimental results on our database demonstrate that the proposed method is effective for breast tumor classification and outperforms existing multi-modality fusion methods. The AUC, accuracy, specificity, and sensitivity are 85.1%, 86.7%, 83.3%, and 88.9%, respectively.
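To make the two components concrete, the sketch below shows one plausible PyTorch-style realization: a cross-modality relation attention block that re-weights one modality's features by their correlation with the other, and a symmetric-KL consistency loss between the two modality classifiers. The class and function names, the scaled dot-product attention form, and the symmetric-KL choice are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationAttention(nn.Module):
    """Hypothetical cross-modality relation attention: features of one
    modality are refined using their correlation with the other modality."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.value = nn.Linear(dim, dim)

    def forward(self, x_self, x_other):
        # Queries come from the modality being refined, keys from the
        # other modality, so attention scores encode cross-modality correlation.
        q = self.query(x_self)              # (B, N, D)
        k = self.key(x_other)               # (B, N, D)
        v = self.value(x_self)              # (B, N, D)
        scores = torch.softmax(q @ k.transpose(-2, -1) / q.size(-1) ** 0.5, dim=-1)
        return x_self + scores @ v          # residual refinement

def consistency_loss(logits_dwi, logits_adc):
    """One plausible form of the consistency regularizer: a symmetric KL
    divergence that pushes the DWI and ADC classifiers toward agreement."""
    log_p = F.log_softmax(logits_dwi, dim=-1)
    log_q = F.log_softmax(logits_adc, dim=-1)
    return 0.5 * (F.kl_div(log_p, log_q.exp(), reduction="batchmean")
                  + F.kl_div(log_q, log_p.exp(), reduction="batchmean"))
```

In such a design, the consistency term would be added to the usual classification losses with a weighting coefficient, so noisy disagreement between the two branches is penalized without dominating the supervised signal.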
