Abstract

Visual Question Answering (VQA) in the medical domain has attracted increasing attention from the research community in recent years due to its wide range of applications. This paper investigates several deep learning approaches to building a medical VQA system based on ImageCLEF's VQA-Med dataset, which consists of about 4K images and roughly 15K question-answer pairs. Due to the wide variety of images and questions in this dataset, the proposed model is hierarchical, consisting of several sub-models, each tailored to handle a certain type of question. To this end, a dedicated model first classifies each question into one of four categories, and each category is then handled by a separate sub-model. At their core, all of these models are built on pre-trained Convolutional Neural Networks (CNNs). To obtain the best results, extensive experiments are performed and various techniques are employed, including Data Augmentation (DA), Multi-Task Learning (MTL), Global Average Pooling (GAP), ensembling, and Sequence-to-Sequence (Seq2Seq) models. Overall, the final model achieves an accuracy of 60.8% and a BLEU score of 63.4%, results that are competitive with the state of the art despite using simpler and less computationally demanding sub-models.
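To make the classify-then-dispatch design concrete, the following Python sketch shows one plausible shape for such a hierarchical pipeline. It is a minimal illustration only: the names (classify_question, SUB_MODELS, answer), the keyword heuristic, and the category labels (drawn from the four VQA-Med 2019 question categories) are stand-ins and assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a hierarchical medical-VQA pipeline:
# classify the question, then dispatch to a category-specific sub-model.
# All names and the heuristic classifier below are hypothetical.
from typing import Callable, Dict

# Four question categories, as in VQA-Med 2019: imaging modality,
# imaging plane, organ system, and abnormality.
CATEGORIES = ("modality", "plane", "organ", "abnormality")

def classify_question(question: str) -> str:
    """Placeholder for the trained question classifier.

    In the paper this role is played by a learned model; a trivial
    keyword heuristic stands in here so the sketch runs end to end.
    """
    q = question.lower()
    if "modality" in q or "mri" in q or "ct" in q:
        return "modality"
    if "plane" in q:
        return "plane"
    if "organ" in q or "part" in q:
        return "organ"
    return "abnormality"

# One sub-model per category. Each would wrap a pre-trained CNN
# (with, e.g., a Seq2Seq decoder for open-ended abnormality questions);
# here each is stubbed out as a function returning a dummy answer.
SUB_MODELS: Dict[str, Callable[[bytes, str], str]] = {
    c: (lambda image, question, c=c: f"<{c} answer>") for c in CATEGORIES
}

def answer(image: bytes, question: str) -> str:
    """Top-level pipeline: route the question, then answer it."""
    category = classify_question(question)
    return SUB_MODELS[category](image, question)

if __name__ == "__main__":
    print(answer(b"...", "In what plane was this image taken?"))
    # -> "<plane answer>"
```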
