Transformers have been recognized as powerful tools for various cross-modal tasks owing to their superior ability to perform representation learning through self-attention. Existing transformer-based cross-modal models can be categorized into single-stream and dual-stream models. By performing fine-grained interaction with self-attention on the concatenated cross-modal features, single-stream models can learn intra- and inter-modal correlations simultaneously. However, this simple concatenation treats the inputs of different modalities equally; the heterogeneous differences between modalities are therefore ignored, leading to a modality gap. Dual-stream models process the inputs of each modality separately and then perform cross-modal interaction in a subsequent fusion network, so they fail to integrate fine-grained intra- and inter-modal correlations within a unified module. To this end, we propose CrossFormer, an effective heterogeneous graph transformer for dual-stream cross-modal representation learning, which constructs a heterogeneous graph as a bridge to achieve fine-grained intra- and inter-modal interaction on a dual-stream network. Specifically, we first represent the multi-modal data with a heterogeneous graph, then develop a dual-positional encoding strategy that injects relative positional information into the heterogeneous graph. Finally, dual-stream self-attention is performed on the heterogeneous graph, bridging the gap between modalities and capturing fine-grained intra- and inter-modal interactions simultaneously. Extensive experiments on various cross-modal tasks demonstrate the superiority of our method.
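To make the idea of dual-positional encoding and joint intra-/inter-modal attention over a heterogeneous graph concrete, the following is a minimal PyTorch sketch, not the paper's implementation: the module `DualStreamHeteroAttention`, its dimensions, and the choice of a per-modality position embedding plus a modality-type embedding are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamHeteroAttention(nn.Module):
    """Sketch: self-attention over a heterogeneous graph whose nodes are
    visual regions and text tokens. Each node gets a dual positional encoding
    (intra-modal position + modality type), and attention is computed jointly
    so every node attends to both intra- and inter-modal neighbors.
    Names and dimensions are assumptions, not the paper's code."""

    def __init__(self, dim: int = 256, max_len: int = 64):
        super().__init__()
        self.pos_emb = nn.Embedding(max_len, dim)   # position within each modality (assumed form)
        self.type_emb = nn.Embedding(2, dim)        # modality type: 0 = vision, 1 = text (assumed)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: (B, Nv, dim) region features; txt: (B, Nt, dim) token features
        Nv, Nt = vis.shape[1], txt.shape[1]
        # Dual positional encoding: intra-modal position plus modality type.
        vis = vis + self.pos_emb(torch.arange(Nv)) + self.type_emb(torch.zeros(Nv, dtype=torch.long))
        txt = txt + self.pos_emb(torch.arange(Nt)) + self.type_emb(torch.ones(Nt, dtype=torch.long))
        # Treat all nodes of the heterogeneous graph as one sequence: the attention
        # matrix then covers vision-vision and text-text (intra-modal) edges as well
        # as vision-text (inter-modal) edges in a single module.
        x = torch.cat([vis, txt], dim=1)                       # (B, Nv + Nt, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return self.out(attn @ v)

# Usage: fuse 36 region features with 20 token features.
layer = DualStreamHeteroAttention()
fused = layer(torch.randn(2, 36, 256), torch.randn(2, 20, 256))
print(fused.shape)  # torch.Size([2, 56, 256])
```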