Abstract

With the wide application of radiography, massive amounts of chest X-ray images and their associated radiology reports have been accumulated. Cross-modal retrieval between chest X-rays and radiology reports is useful for practicing radiologists and in many other medical settings. However, existing cross-modal retrieval methods, which are mainly designed for natural images, are not well suited to chest X-ray images and radiology reports. In this paper, we propose category-supervised cross-modal hashing retrieval between chest X-rays and radiology reports, which learns cross-modal similarity using a category-supervised hashing network and a union hashing network. Specifically, we design a category hashing network to learn a hash code for each category, then use the learned category hash codes as supervised information to guide the learning of the image-modality and text-modality hashes. In addition, we propose a union hashing network to learn the correlation between the two modalities. Comprehensive experiments on the public MIMIC-CXR dataset show that the proposed method outperforms the traditional shallow method by 6.62% on average and achieves a 0.57% improvement over deep cross-modal hashing (DCMH) in terms of mAP. An ablation study further demonstrates the effectiveness of the proposed method.
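The core idea above can be sketched in miniature: continuous features from each modality branch are binarized into hash codes, and a per-category hash code serves as a shared supervision target so that matching images and reports land near each other in Hamming space. The following NumPy toy is a minimal sketch of that binarize-and-compare step only; the category names, bit widths, and feature values are illustrative assumptions, not details from the paper.

```python
import numpy as np

def binarize(features):
    """Map continuous network outputs to ±1 hash codes (sign function)."""
    return np.where(features >= 0, 1, -1)

def hamming_similarity(a, b):
    """Fraction of matching bits between two ±1 hash codes (1.0 = identical)."""
    return float(np.mean(a == b))

# Hypothetical 4-bit category hash codes acting as supervision targets.
category_codes = {
    "cardiomegaly": np.array([1, -1, 1, 1]),
    "effusion":     np.array([-1, 1, -1, 1]),
}

# Illustrative continuous outputs from image- and text-modality branches.
img_feat = np.array([0.8, -0.3, 0.5, 0.9])
txt_feat = np.array([0.6, -0.1, 0.7, 0.4])

img_code = binarize(img_feat)
txt_code = binarize(txt_feat)

# Category supervision: both modalities are pushed toward their category's code,
# which also pulls the two modalities' codes toward each other.
target = category_codes["cardiomegaly"]
print(hamming_similarity(img_code, target))   # image code vs. category code
print(hamming_similarity(img_code, txt_code)) # image code vs. report code
```

In training, the sign function would be relaxed (e.g. tanh) so gradients can flow, but the retrieval-time comparison reduces to exactly this bitwise match.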
