Abstract

With the wide application of radiography, massive amounts of chest X-ray images and their associated radiology reports have been accumulated. Cross-modal retrieval between chest X-rays and radiology reports is useful for improving the practice level of radiologists and in many other medical settings. However, existing cross-modal retrieval methods, which are mainly designed for natural images, are not well suited to chest X-ray images and radiology reports. In this paper, we propose category-supervised cross-modal hashing retrieval between chest X-rays and radiology reports, which learns cross-modal similarity using a category-supervised hashing network and a union hashing network. Specifically, we design a category hashing network to learn a hash code for each category, and then use the learned category hash codes as supervised information to guide the learning of the image-modality and text-modality hash codes. In addition, we propose a union hashing network to learn the correlation between the two modalities. Comprehensive experiments on the public MIMIC-CXR dataset show that the proposed method outperforms traditional shallow methods by 6.62% on average, and achieves a 0.57% improvement over deep cross-modal hashing (DCMH) in terms of mAP. An ablation study further demonstrates the effectiveness of the proposed method.
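The core retrieval mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation: the category codes below are random stand-ins for the outputs of the category hashing network, and the projection matrix stands in for the trained image/text hashing networks. The sketch only shows the shared convention that all of these networks rely on: binarizing features into ±1 hash codes with a sign function, and ranking candidates by Hamming distance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, n_categories, d = 16, 4, 8

# Hypothetical category hash codes: one +/-1 code per category.
# In the paper these are learned by the category hashing network;
# here they are random placeholders.
category_codes = np.sign(rng.standard_normal((n_categories, n_bits)))

def binarize(features, W):
    """Map continuous modality features to +/-1 hash codes via sign()."""
    return np.sign(features @ W)

# Toy image features and a random projection, standing in for the
# trained image-modality hashing network.
W_img = rng.standard_normal((d, n_bits))
img_feats = rng.standard_normal((5, d))
img_codes = binarize(img_feats, W_img)

def hamming_distance(a, b):
    """Hamming distance between +/-1 codes: (n_bits - a . b) / 2."""
    return (n_bits - a @ b.T) / 2

# Retrieval: rank stored codes by Hamming distance to a query code.
# With learned codes, the nearest category code would indicate the
# query's category; text codes would be ranked the same way.
dist = hamming_distance(img_codes[0], category_codes)
nearest_category = int(np.argmin(dist))
```

Because the codes are binary, the Hamming distance reduces to an inner product, which is what makes hashing-based retrieval fast on large collections such as MIMIC-CXR.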
