Abstract

With the development and maturation of deep learning, its application to multimodal data (images, speech, and text) has advanced tremendously. In this paper, building on neural network models for multimodal data in deep learning, we analyze their adaptation to visual question answering (VQA), propose a VQA model based on a gated attention mechanism, and construct an answer prediction mechanism adapted to a recurrent neural network transfer model. To address the model's low accuracy on complex questions, an inference network module is built using visual reasoning so that the model can extract the features of complex questions and improve its reasoning capability. By predicting answers from the semantic information of the question text and the visual elements of the image, from cross-modal correlations, and from inference, advances in natural language processing and computer vision lead to improved answer accuracy in deep learning-based VQA models. Multiple sets of experiments show that the model with deep learning-based reasoning answers complex questions with significantly higher accuracy than existing methods.
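
As a rough illustration of the gated attention mechanism described above, the sketch below fuses a question embedding with image-region features before answer prediction. The module name `GatedAttentionFusion`, the feature dimensions, and the specific gating form are assumptions for illustration only, not the paper's exact architecture.

```python
# Minimal sketch (assumed names and dimensions): gated attention fusing a
# question embedding with image-region features for VQA-style answer prediction.
import torch
import torch.nn as nn

class GatedAttentionFusion(nn.Module):
    def __init__(self, q_dim=512, v_dim=512, hidden_dim=512):
        super().__init__()
        self.q_proj = nn.Linear(q_dim, hidden_dim)    # project question embedding
        self.v_proj = nn.Linear(v_dim, hidden_dim)    # project each image region
        self.attn = nn.Linear(hidden_dim, 1)          # per-region attention score
        self.gate = nn.Linear(q_dim + v_dim, v_dim)   # gate over attended visual features

    def forward(self, q, v):
        # q: (batch, q_dim) question encoding, e.g. final state of a GRU/LSTM
        # v: (batch, regions, v_dim) image-region features from a CNN backbone
        scores = self.attn(torch.tanh(self.q_proj(q).unsqueeze(1) + self.v_proj(v)))
        weights = torch.softmax(scores, dim=1)        # attention weights over regions
        attended = (weights * v).sum(dim=1)           # question-guided visual summary
        g = torch.sigmoid(self.gate(torch.cat([q, attended], dim=-1)))
        return g * attended                           # gated joint representation

# Example usage with random tensors
fusion = GatedAttentionFusion()
q = torch.randn(8, 512)         # batch of question encodings
v = torch.randn(8, 36, 512)     # 36 region features per image
joint = fusion(q, v)            # (8, 512), fed to an answer classifier
```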
