Abstract
Image-text matching is a challenging task in multimedia analysis. Many advanced methods explore local and global cross-modal correspondence for matching. However, most of them overlook the need to eliminate potentially irrelevant features, both from the original features of each modality and from the cross-modal common features. Moreover, the features extracted from image regions and sentence words contain cluttered background noise and varying occlusion noise, which degrades alignment. In contrast to these methods, this paper proposes a novel DCT-Transformer Adversarial Network (DTAN) for image-text matching. The model learns an effective matching metric through two components: i) the DCT-Transformer applies the Discrete Cosine Transform (DCT) within a transformer mechanism to extract multi-domain common representations and eliminate irrelevant features across modalities (inter-modal); specifically, the DCT divides multi-modal content into chunks of different frequencies and quantizes them. ii) The adversarial network introduces an adversarial objective that combines the original features of each single modality with the multi-domain common representation, alleviating background noise within each modality (intra-modal). The proposed adversarial feature augmentation readily yields a common representation that retains only the information useful for alignment. Extensive experiments on the benchmark datasets Flickr30K and MS-COCO demonstrate the superiority of the DTAN model over state-of-the-art methods.
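To make the chunk-and-quantize idea concrete, the following is a minimal sketch, not the authors' implementation: a feature map is split into fixed-size chunks, each chunk is mapped to the frequency domain with a 2-D DCT, and its coefficients are uniformly quantized. The chunk size, quantization step, and feature dimensions are illustrative assumptions.

```python
# Hypothetical sketch of the DCT chunking/quantization step described in the
# abstract; chunk size and q_step are assumptions, not values from the paper.
import numpy as np
from scipy.fft import dctn, idctn

def dct_quantize(features: np.ndarray, chunk: int = 8, q_step: float = 0.1) -> np.ndarray:
    """Blockwise 2-D DCT over a (rows, cols) feature map with uniform
    coefficient quantization, mimicking the chunk-and-quantize idea."""
    rows, cols = features.shape
    out = features.copy()
    for i in range(0, rows - rows % chunk, chunk):
        for j in range(0, cols - cols % chunk, chunk):
            block = features[i:i + chunk, j:j + chunk]
            coeffs = dctn(block, norm="ortho")            # frequency-domain chunk
            coeffs = np.round(coeffs / q_step) * q_step   # uniform quantization
            out[i:i + chunk, j:j + chunk] = idctn(coeffs, norm="ortho")
    return out

# Example: a region-feature matrix (e.g., 32 regions x 64-d projected features).
feats = np.random.randn(32, 64).astype(np.float64)
common = dct_quantize(feats)
print(common.shape)  # (32, 64)
```

Quantization acts here as a simple frequency-domain filter: small high-frequency coefficients are rounded away, which is one plausible way to suppress the irrelevant components the abstract refers to.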