Abstract

Physicians typically combine multi-modal data to make a graded diagnosis of breast tumors. However, most existing breast tumor grading methods rely solely on image information, which limits grading accuracy. This paper proposes a Multi-information Selection Aggregation Graph Convolutional Network (MSA-GCN) for breast tumor grading. First, to fully exploit phenotypic data reflecting the clinical and pathological characteristics of tumors, an automatic combination screening and weight encoder is proposed for the phenotypic data, which constructs a population graph with improved structural information. Then, a graph structure is designed through similarity learning to capture the correlations between patients' image features. Finally, a multi-information selection aggregation mechanism is employed in the graph convolution model to extract effective features from the multi-modal data and enhance classification performance. The proposed method is evaluated on two clinical datasets, the Digital Database for Screening Mammography (DDSM) and INbreast, achieving average classification accuracies of 90.74% and 85.35%, respectively, surpassing existing methods. In conclusion, our method effectively fuses image and non-image information, leading to a significant improvement in the accuracy of breast tumor grading.
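To make the population-graph idea concrete, the following is a minimal illustrative sketch, not the authors' MSA-GCN implementation: it builds per-patient edge weights from phenotypic (non-image) affinity combined with a learned image-feature similarity, then runs a standard two-layer GCN over the resulting weighted graph. The use of PyTorch, the RBF-style phenotypic affinity, the cosine-style learned similarity, and all dimensions are assumptions for illustration only.

```python
# Hypothetical sketch of a population-graph GCN for tumor grading.
# Not the paper's code; edge construction and dimensions are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


def phenotypic_affinity(pheno):
    # pheno: (N, P) encoded non-image attributes per patient.
    # RBF-style affinity: close phenotypes -> edge weight near 1.
    dist = torch.cdist(pheno, pheno)          # (N, N) pairwise distances
    return torch.exp(-dist ** 2)


class ImageSimilarity(nn.Module):
    # Learns a similarity between image feature vectors (similarity-learning step).
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.proj = nn.Linear(dim, hidden)

    def forward(self, x):
        z = F.normalize(self.proj(x), dim=1)  # (N, hidden), unit-norm rows
        return torch.relu(z @ z.t())          # cosine-like similarity, clipped at 0


class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization: D^{-1/2} A D^{-1/2}
        deg = adj.sum(dim=1).clamp(min=1e-6)
        d_inv_sqrt = deg.pow(-0.5)
        adj_norm = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
        return F.relu(self.lin(adj_norm @ x))


class PopulationGCN(nn.Module):
    def __init__(self, img_dim, num_classes, hidden=128):
        super().__init__()
        self.sim = ImageSimilarity(img_dim)
        self.gcn1 = GCNLayer(img_dim, hidden)
        self.gcn2 = GCNLayer(hidden, num_classes)

    def forward(self, img_feat, pheno):
        # Fuse non-image affinity and learned image similarity into one weighted graph.
        adj = phenotypic_affinity(pheno) * self.sim(img_feat)
        adj = adj + torch.eye(adj.size(0))    # keep self-loops
        h = self.gcn1(img_feat, adj)
        return self.gcn2(h, adj)              # per-patient grade logits


# Toy usage: 20 patients, 256-d image features, 5 phenotypic attributes, 3 grades.
model = PopulationGCN(img_dim=256, num_classes=3)
logits = model(torch.randn(20, 256), torch.randn(20, 5))
print(logits.shape)  # torch.Size([20, 3])
```

In this sketch the element-wise product of the two affinity matrices plays the role of the paper's combined graph weights; the actual MSA-GCN uses its own automatic combination screening, weight encoding, and multi-information selection aggregation, which are not reproduced here.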
