Recently, Vision Transformers (ViTs) have become the mainstream models for image captioning. ViTs take all image tokens as input to extract visual features, which raises concerns about uninformative tokens and incurs a large amount of computation. This paper proposes a novel token reduction module to remedy this drawback. Specifically, the module employs a ViT to embed the input tokens and adaptively learns informative visual tokens via token attention at channel and spatial granularity. Furthermore, an attribute prediction module is designed to strengthen the relationship between vision and language. Technically, attribute prediction is performed by a Multi-Layer Perceptron (MLP) classifier. Both the visual representations and the attribute representations are obtained by Transformers and then combined as the input of the Transformer decoder for caption generation. All modules are constructed within an encoder–decoder framework and support end-to-end learning. Experimental results show that our approach effectively reduces the computational cost of ViTs while maintaining comparable performance on the MS COCO and NoCaps datasets. For example, by pruning more than 70% of the input tokens, our approach reduces GFLOPs by 41% to 47% while preserving accuracy, achieving a 142.1 CIDEr score on the MS COCO dataset.
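A minimal sketch of the kind of channel-spatial token scoring and top-k pruning described above, not the authors' actual implementation: the module `TokenReduction`, the `keep_ratio` parameter, and the gating layers are illustrative assumptions, and the input is assumed to be ViT patch embeddings of shape (batch, num_tokens, dim).

```python
import torch
import torch.nn as nn


class TokenReduction(nn.Module):
    """Score ViT tokens with channel and spatial attention, then keep the top-k."""

    def __init__(self, dim: int, keep_ratio: float = 0.3):
        super().__init__()
        self.keep_ratio = keep_ratio
        # Channel attention: a per-channel gate shared across all tokens.
        self.channel_gate = nn.Sequential(
            nn.Linear(dim, dim // 4), nn.ReLU(),
            nn.Linear(dim // 4, dim), nn.Sigmoid(),
        )
        # Spatial attention: one importance score per token.
        self.spatial_score = nn.Linear(dim, 1)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, D) patch embeddings from the ViT encoder.
        channel_weights = self.channel_gate(tokens.mean(dim=1, keepdim=True))  # (B, 1, D)
        gated = tokens * channel_weights                                       # (B, N, D)
        scores = self.spatial_score(gated).squeeze(-1)                         # (B, N)
        k = max(1, int(tokens.size(1) * self.keep_ratio))
        top_idx = scores.topk(k, dim=1).indices                                # (B, k)
        idx = top_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        return torch.gather(gated, dim=1, index=idx)                           # (B, k, D)


if __name__ == "__main__":
    x = torch.randn(2, 196, 768)        # 14x14 patches at ViT-Base width
    reduced = TokenReduction(768)(x)
    print(reduced.shape)                # torch.Size([2, 58, 768]) -- ~70% of tokens pruned
```

With `keep_ratio=0.3`, roughly 70% of the tokens are dropped before the decoder, which is the regime in which the abstract reports the 41% to 47% GFLOPs reduction.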