Abstract
The Visual Question Answering (VQA) task aims to answer a natural-language question about an image by combining visual information with the question text. On the one hand, traditional models mostly extract coarse-grained question features, so they cannot effectively exploit the deep semantics of the text. On the other hand, traditional text-image fusion mostly concatenates the feature vectors of the different modalities in series; this simple fusion cannot handle the redundancy and conflict between modal features, which ultimately limits the accuracy of Visual Question Answering. To address these problems, this paper proposes a multi-granularity text feature representation module (MGT), which uses hierarchical dilated convolutions to preserve textual information at multiple levels and from multiple perspectives, thereby improving the utilization of text features. This paper also proposes a Transformer-based multimodal fusion module (BTMF), which exploits the intrinsic correlation between the image and text modalities to dynamically adjust the weight of each modal feature, learning information from the different modalities while keeping the contextual information unchanged. The effectiveness of the proposed method is verified on the VQA 2.0 dataset.
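To make the two ideas in the abstract concrete, the sketch below shows one possible realization: a text encoder built from stacked 1D dilated convolutions with different dilation rates (one branch per granularity), followed by a Transformer-style cross-attention block in which the question features re-weight the image region features. This is a minimal illustration only, not the authors' MGT/BTMF code; all class names, dimensions, and hyperparameters are assumptions.

```python
# Illustrative sketch, assuming PyTorch; not the authors' implementation.
import torch
import torch.nn as nn


class MultiGranularityTextEncoder(nn.Module):
    """Capture word-, phrase-, and sentence-level cues via increasing dilation rates."""
    def __init__(self, dim=512, dilations=(1, 2, 4)):
        super().__init__()
        # One convolution branch per granularity; padding=d keeps the sequence length.
        self.branches = nn.ModuleList([
            nn.Conv1d(dim, dim, kernel_size=3, dilation=d, padding=d)
            for d in dilations
        ])
        self.proj = nn.Linear(dim * len(dilations), dim)

    def forward(self, x):                       # x: (batch, seq_len, dim)
        h = x.transpose(1, 2)                   # Conv1d expects (batch, dim, seq_len)
        feats = [torch.relu(branch(h)) for branch in self.branches]
        multi = torch.cat(feats, dim=1).transpose(1, 2)
        return self.proj(multi)                 # fused multi-granularity features


class TransformerFusion(nn.Module):
    """Cross-attention lets text queries dynamically weight image region features."""
    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(), nn.Linear(dim * 4, dim))

    def forward(self, text, image):             # text: (B, Lt, D), image: (B, Lv, D)
        attended, _ = self.cross_attn(query=text, key=image, value=image)
        fused = self.norm1(text + attended)     # residual keeps textual context intact
        return self.norm2(fused + self.ffn(fused))


if __name__ == "__main__":
    text = torch.randn(2, 14, 512)              # e.g. 14 question tokens
    image = torch.randn(2, 36, 512)             # e.g. 36 detected image regions
    text = MultiGranularityTextEncoder()(text)
    fused = TransformerFusion()(text, image)
    print(fused.shape)                          # torch.Size([2, 14, 512])
```

The residual connections around the cross-attention and feed-forward layers are one way to "keep the contextual information unchanged" while injecting the other modality, and the attention weights play the role of the dynamic per-feature weighting described in the abstract.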