Abstract

The generic neural encoder-decoder framework for image captioning typically uses a convolutional neural network (CNN) to extract image features and a recurrent neural network to generate a sentence describing the image. The residual attention network achieves strong results on image classification and has been shown to extract features more effectively than classical CNNs. In this paper, we propose combining the residual attention network with a classical CNN to extract spatial image features, which are then fed into our visual attention module. Finally, a decoder consisting of two long short-term memory networks (Two-LSTM) generates a sentence describing the image. We validate our design with BLEU-N scores on the MSCOCO dataset.
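
To make the described pipeline concrete, the sketch below illustrates the attend-then-decode stage in PyTorch: soft visual attention over spatial image features feeding a stack of two LSTM cells. This is a minimal sketch under assumed details, not the paper's implementation: the additive attention form, module names, and dimensions (e.g., 2048-d region features, 512-d hidden states) are all assumptions for illustration.

```python
# Minimal sketch (not the authors' code): spatial features -> visual
# attention -> two-LSTM decoder. Names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualAttention(nn.Module):
    """Additive (Bahdanau-style) attention over K spatial feature vectors."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (B, K, feat_dim); hidden: (B, hidden_dim)
        e = self.score(torch.tanh(self.feat_proj(feats) +
                                  self.hidden_proj(hidden).unsqueeze(1)))  # (B, K, 1)
        alpha = F.softmax(e, dim=1)            # attention weights over K regions
        context = (alpha * feats).sum(dim=1)   # weighted sum: (B, feat_dim)
        return context, alpha.squeeze(-1)

class TwoLSTMDecoder(nn.Module):
    """Two stacked LSTM cells: the first fuses word + visual context,
    the second produces the state used for word prediction and attention."""
    def __init__(self, vocab_size, feat_dim=2048, embed_dim=512,
                 hidden_dim=512, attn_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attend = VisualAttention(feat_dim, hidden_dim, attn_dim)
        self.lstm1 = nn.LSTMCell(embed_dim + feat_dim, hidden_dim)
        self.lstm2 = nn.LSTMCell(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, captions):
        # feats: (B, K, feat_dim); captions: (B, T) token ids (teacher forcing)
        B, T = captions.shape
        h1 = c1 = feats.new_zeros(B, self.lstm1.hidden_size)
        h2 = c2 = feats.new_zeros(B, self.lstm2.hidden_size)
        logits = []
        for t in range(T):
            context, _ = self.attend(feats, h2)  # attend with top LSTM's state
            x = torch.cat([self.embed(captions[:, t]), context], dim=1)
            h1, c1 = self.lstm1(x, (h1, c1))
            h2, c2 = self.lstm2(h1, (h2, c2))
            logits.append(self.out(h2))
        return torch.stack(logits, dim=1)        # (B, T, vocab_size)

# Example: 196 spatial regions (a hypothetical 14x14 grid) of 2048-d features.
decoder = TwoLSTMDecoder(vocab_size=10000)
feats = torch.randn(4, 196, 2048)
captions = torch.randint(0, 10000, (4, 20))
print(decoder(feats, captions).shape)  # torch.Size([4, 20, 10000])
```

In this sketch the spatial features stand in for the output of the residual-attention-plus-CNN encoder; any comparable grid of region features could be substituted.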
