Abstract

Medical visual question answering (VQA) aims to correctly answer a clinical question about a given medical image. However, because manual annotation of medical data is expensive, the shortage of labeled data limits the development of medical VQA. In this paper, we propose a simple yet effective data augmentation method, VQAMix, to mitigate this data limitation. Specifically, VQAMix generates more labeled training samples by linearly combining pairs of VQA samples, and it can be easily embedded into any visual-language model to boost performance. However, mixing two VQA samples constructs new connections between images and questions drawn from different samples, so the answers for these newly fabricated image-question pairs may be missing or meaningless. To address the missing-answer problem, we first develop the Learning with Missing Labels (LML) strategy, which simply excludes the missing answers. To alleviate the meaningless-answer issue, we design the Learning with Conditional-mixed Labels (LCL) strategy, which further exploits a language-type prior to force the mixed pairs to have reasonable answers that belong to the same category. Experimental results on the VQA-RAD and PathVQA benchmarks show that our proposed method improves the baseline by about 7% and 5%, respectively, averaged over two backbones. More importantly, VQAMix improves confidence calibration and model interpretability, which is important for medical VQA models in practical applications. All code and models are available at https://github.com/haifangong/VQAMix.
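
The core augmentation step is a mixup-style linear combination of two VQA samples. The sketch below illustrates this idea under stated assumptions: batched image and question tensors, one-hot answer labels, and a Beta-distributed mixing coefficient. The function name and tensor shapes are illustrative, not the authors' implementation; the LML/LCL label strategies and the official code live in the linked repository.

```python
import torch

def mix_vqa_batch(images, questions, answers, alpha=1.0):
    """Hypothetical mixup-style combination of VQA samples (sketch).

    images:    (B, C, H, W) image tensor (or precomputed visual features)
    questions: (B, L, D) question embedding tensor
    answers:   (B, A) one-hot answer labels
    Returns mixed inputs plus both label sets and the mixing coefficient,
    so the loss can be computed as lam * loss(pred, y_a) + (1 - lam) * loss(pred, y_b).
    """
    # Sample the mixing coefficient from Beta(alpha, alpha), as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Pair each sample with a randomly permuted partner from the same batch.
    perm = torch.randperm(images.size(0))

    mixed_images = lam * images + (1.0 - lam) * images[perm]
    mixed_questions = lam * questions + (1.0 - lam) * questions[perm]

    # The mixed image-question pair inherits both answers; the paper's
    # LML/LCL strategies then discard missing answers or condition the
    # mixed labels on the question type.
    return mixed_images, mixed_questions, answers, answers[perm], lam
```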
