Abstract

We propose a method for free-form Visual Question Answering (VQA) from human brain activity, which we call brain-decoding VQA. In computer vision, the VQA task is to generate an answer given an image and a question about its contents. The proposed method answers arbitrary visual questions about images from brain activity measured by functional Magnetic Resonance Imaging (fMRI) while subjects view those images. By routing diverse information decoded from brain activity through a single VQA model, it enables a more detailed understanding of images and more complex reasoning. In addition, because fMRI datasets are generally small, we leverage unlabeled images not used in the training phase to improve the performance of the transformation. As a result, the proposed method can answer visual questions from a small amount of fMRI data measured while subjects view images.
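The pipeline the abstract describes, decoding fMRI activity into the feature space of a pretrained VQA model, can be sketched minimally. The shapes, the ridge-regression mapping, and the `vqa_answer` stand-in below are illustrative assumptions, not the paper's actual architecture or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_voxels, n_feat = 120, 500, 64  # small, fMRI-scale toy dataset

# Simulated training pairs: brain responses X paired with the image features Y
# that a pretrained VQA model would extract from the viewed images.
X = rng.standard_normal((n_train, n_voxels))
W_true = rng.standard_normal((n_voxels, n_feat)) / np.sqrt(n_voxels)
Y = X @ W_true + 0.1 * rng.standard_normal((n_train, n_feat))

# Closed-form ridge regression: W = (X^T X + lambda * I)^-1 X^T Y.
# Regularization matters because fMRI datasets have few samples per voxel.
lam = 10.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Decode image features from a new fMRI sample and hand them to the VQA model.
x_new = rng.standard_normal((1, n_voxels))
decoded = x_new @ W  # shape (1, n_feat): predicted image features


def vqa_answer(image_features, question):
    """Hypothetical placeholder for a pretrained VQA model's answer head."""
    return "yes" if image_features.mean() > 0 else "no"


print(decoded.shape, vqa_answer(decoded, "Is there a dog?"))
```

Because the decoded features live in the VQA model's own input space, a single pretrained model can in principle answer arbitrary questions about the viewed image without retraining per question.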
