Abstract

Medical Visual Question Answering (Med-VQA) automatically answers medical questions raised by patients, helping to relieve the shortage of experienced doctors. Cross-modal feature alignment is a major challenge in Med-VQA. Moreover, it is critical to extract sufficiently rich semantic features while accounting for the characteristics of medical images and medical language. In this paper, we propose a novel From Image type point To Sentence (FITS) method to tackle these challenges. Specifically, the type of a medical image is represented as a type point, which is then incorporated into the question sentence representation. This combined representation optimizes the feature distribution in the embedding space and thus strengthens semantic alignment. The type point is also used in two feature extraction modules, one for medical questions and one for images, which improves the reasoning ability of each modality and further enhances the applicability of the fusion method for Med-VQA. Experimental results show that FITS significantly outperforms previous approaches in accuracy, especially on open-ended questions.
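The core idea of fusing an image-type representation into the question embedding can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the type-point table, the additive fusion, and all names (`type_points`, `fuse_type_into_sentence`) are assumptions introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one learned "type point" vector per image type
# (e.g. CT, MRI, X-ray). In FITS these would be learned jointly with the
# rest of the model; here they are random placeholders.
EMBED_DIM = 64
type_points = {
    "ct": rng.normal(size=EMBED_DIM),
    "mri": rng.normal(size=EMBED_DIM),
    "xray": rng.normal(size=EMBED_DIM),
}

def fuse_type_into_sentence(sentence_embedding: np.ndarray,
                            image_type: str) -> np.ndarray:
    """Combine a question-sentence embedding with the image's type point.

    Additive fusion is an assumption chosen for simplicity; the point is
    that the question representation becomes conditioned on the image
    type, pulling the two modalities toward a shared embedding space.
    """
    fused = sentence_embedding + type_points[image_type]
    # L2-normalize so fused representations lie on a common unit sphere,
    # which simplifies cross-modal similarity comparisons.
    return fused / np.linalg.norm(fused)

question_vec = rng.normal(size=EMBED_DIM)
fused_vec = fuse_type_into_sentence(question_vec, "mri")
print(fused_vec.shape)  # (64,)
```

A downstream fusion module could then compare `fused_vec` against image features in the same space, which is the alignment benefit the abstract describes.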
