Vision-language models (VLMs) have shown promising improvements on various visual tasks. Most existing VLMs employ two separate transformer-based encoders, each dedicated to modeling visual and language features independently. Because the visual and language features are unaligned in the feature space, it is challenging for the multi-modal encoder to learn vision-language interactions. In this paper, we propose a Visual-guided Hierarchical Iterative Fusion (VgHIF) method for VLMs in video action recognition, which acquires more discriminative vision and language representations. VgHIF leverages visual features from different levels of the visual encoder to interact with the language representation. The interaction is processed by an attention mechanism that computes the correlation between the visual features and the language representation. VgHIF learns grounded video-text representations and supports many different pre-trained VLMs in a flexible and efficient manner at a small computational cost. We conducted experiments on Kinetics-400, Mini-Kinetics-200, HMDB51, and UCF101 with three VLMs: CLIP, X-CLIP, and ViFi-CLIP. Under both fully supervised and few-shot settings, the proposed method improves Top-1 accuracy over the baseline multi-modal models without VgHIF to varying degrees, and several results are comparable to state-of-the-art performance, which verifies the effectiveness of the proposed method.
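To make the fusion idea concrete, the sketch below shows one way a visual-guided, level-by-level cross-attention fusion could be realized in PyTorch. The module name, feature dimension, number of levels, and residual update rule are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of hierarchical iterative fusion via cross-attention.
# All hyperparameters and the update rule are assumptions for illustration.
import torch
import torch.nn as nn


class HierarchicalCrossAttentionFusion(nn.Module):
    """Iteratively refine a text embedding with visual features taken from
    several levels of a visual encoder (hypothetical configuration)."""

    def __init__(self, dim: int = 512, num_levels: int = 3, num_heads: int = 8):
        super().__init__()
        # One cross-attention block per visual-encoder level (assumed design).
        self.blocks = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_levels)]
        )
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(num_levels)])

    def forward(self, text_emb, visual_feats):
        # text_emb: (B, 1, D) pooled language representation
        # visual_feats: list of (B, N_l, D) visual tokens, one tensor per level
        for attn, norm, feats in zip(self.blocks, self.norms, visual_feats):
            # Text queries attend to visual tokens; the attention weights play
            # the role of a text-visual correlation at this level.
            fused, _ = attn(query=text_emb, key=feats, value=feats)
            text_emb = norm(text_emb + fused)  # residual iterative update
        return text_emb  # visually grounded language representation


if __name__ == "__main__":
    B, D = 2, 512
    fusion = HierarchicalCrossAttentionFusion(dim=D, num_levels=3)
    text = torch.randn(B, 1, D)
    visuals = [torch.randn(B, 196, D) for _ in range(3)]  # dummy multi-level tokens
    print(fusion(text, visuals).shape)  # torch.Size([2, 1, 512])
```

In this sketch the text representation is updated once per encoder level, which is one plausible reading of "hierarchical iterative" fusion; the refined embedding could then be matched against video features for action classification.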