Abstract

Image captioning, the task of generating natural language descriptions for images, is an important and challenging problem in computer vision: it requires both visual and linguistic understanding. In this paper, we propose a novel and efficient image captioning model, MobileNet V3-Transformer (Mob-Tran), which combines the advantages of convolutional and transformer architectures. Our model uses an improved MobileNet V3, with its classifier removed, together with the transformer's encoder to extract and enhance visual features from images, and uses the transformer's decoder to generate captions conditioned on the encoded features. We evaluate multiple models, including Mob-Tran, with a combination of automatic and human evaluation, using ten automatic metrics (BLEU-1, BLEU-2, BLEU-3, BLEU-4, CIDEr, ROUGE, METEOR, model storage size, model training time, and model inference time) and five human evaluation criteria (Grammaticality, Adequacy, Logic, Readability, and Humanness). The experimental results demonstrate that our model produces high-quality image captions with low complexity and high efficiency.
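
To make the architecture concrete, below is a minimal PyTorch sketch of the encoder-decoder design the abstract describes: a MobileNet V3 backbone with its classifier removed, a transformer encoder that refines the visual features, and a transformer decoder that generates the caption. The class name MobTran, all hyperparameter values, and the choice of the MobileNet V3-Large variant are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch of the Mob-Tran design described in the abstract.
    # Layer sizes, vocabulary size, and hyperparameters are assumptions.
    import torch
    import torch.nn as nn
    from torchvision.models import mobilenet_v3_large

    class MobTran(nn.Module):
        def __init__(self, vocab_size, d_model=512, nhead=8,
                     num_encoder_layers=3, num_decoder_layers=3, max_len=50):
            super().__init__()
            # MobileNet V3 feature extractor with the classifier head removed:
            # keep only the convolutional stages.
            backbone = mobilenet_v3_large(weights="DEFAULT")
            self.cnn = backbone.features
            # 960 = output channels of the MobileNetV3-Large feature stack.
            self.proj = nn.Conv2d(960, d_model, kernel_size=1)
            # Transformer encoder enhances the grid of visual features.
            enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(enc_layer, num_encoder_layers)
            # Transformer decoder generates the caption token by token.
            dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            self.decoder = nn.TransformerDecoder(dec_layer, num_decoder_layers)
            self.embed = nn.Embedding(vocab_size, d_model)
            self.pos = nn.Embedding(max_len, d_model)
            self.head = nn.Linear(d_model, vocab_size)

        def forward(self, images, captions):
            # images: (B, 3, H, W); captions: (B, T) token ids
            feats = self.proj(self.cnn(images))        # (B, d_model, h, w)
            feats = feats.flatten(2).transpose(1, 2)   # (B, h*w, d_model)
            memory = self.encoder(feats)               # enhanced visual features
            T = captions.size(1)
            tgt = self.embed(captions) + self.pos(
                torch.arange(T, device=captions.device))
            # Causal mask so each position attends only to earlier tokens.
            mask = nn.Transformer.generate_square_subsequent_mask(T).to(captions.device)
            out = self.decoder(tgt, memory, tgt_mask=mask)
            return self.head(out)                      # (B, T, vocab_size) logits

In this sketch, the convolutional feature map is flattened into a sequence of spatial tokens before the transformer encoder, the usual way to bridge convolutional features and attention layers; training would minimize cross-entropy between the predicted logits and shifted ground-truth captions.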
