Abstract

Medical multi-modal retrieval aims to provide doctors with similar medical images across different modalities, which can greatly improve the efficiency and accuracy of clinical diagnosis. However, most existing medical retrieval methods hardly support retrieval over more than two modalities, and simply reduce retrieval to classification or clustering, failing to bridge the gap between the visual information and the semantic information of different medical image modalities. To solve this problem, a Supervised Contrastive Learning method based on a Multiple Pseudo-Siamese network (SCL-MPS) is proposed for multi-modal medical image retrieval. To make semantically similar samples close neighbors on a Riemannian manifold, multiple constraints based on semantic consistency and modal invariance are imposed at different forward stages of SCL-MPS. We theoretically demonstrate the feasibility of the designed constraints. Finally, experiments on four benchmark datasets (ADNI1, ADNI2, ADNI3, and OASIS3) show that SCL-MPS achieves state-of-the-art performance compared to 15 retrieval methods. In particular, SCL-MPS achieves a 100% mAP score in medical cross-modal retrieval on ADNI1.
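The abstract does not spell out the loss function, but the semantic-consistency constraint it describes is in the spirit of a standard supervised contrastive objective, which pulls together embeddings sharing a class label (regardless of modality) and pushes apart the rest. The sketch below is an illustrative NumPy implementation of such a generic supervised contrastive loss, not the actual SCL-MPS objective; all names here are hypothetical.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Illustrative supervised contrastive loss (not the SCL-MPS loss itself).

    Embeddings sharing a label are treated as positives (semantic consistency),
    all other samples as negatives, irrespective of which modality they come from.
    """
    # L2-normalize so that dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # temperature-scaled pairwise similarities
    n = len(labels)
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))
        # average negative log-likelihood of each positive pair for anchor i
        loss += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in positives])
    return loss / n
```

With embeddings from two modalities mapped into the same space, this loss drives same-class samples toward each other, which is the intuition behind making semantically similar samples close neighbors on the manifold.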
