Abstract

Large-scale pre-trained image-text foundation models have excelled in a number of downstream applications. However, most domain generalization techniques do not exploit linguistic knowledge to improve generalization performance. Moreover, text information has largely been ignored in hyperspectral image (HSI) classification tasks. To address these shortcomings, a Multi-modal Domain Generalization Network (MDG) is proposed to learn cross-domain invariant representations from a cross-domain shared semantic space. Only the source domain (SD) is used for training, after which the model is transferred directly to the target domain (TD). Visual and linguistic features are extracted by a dual-stream architecture consisting of an image encoder and a text encoder. A generator is designed to produce extended domain (ED) samples that differ from the SD. Furthermore, linguistic features are used to construct the cross-domain shared semantic space, in which visual-linguistic alignment is achieved through supervised contrastive learning. Extensive experiments on two datasets show that the proposed method outperforms state-of-the-art approaches.
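
The abstract does not give implementation details, but the dual-stream design it describes can be illustrated with a minimal sketch: an image encoder and a text encoder map inputs into a shared embedding space, and a class-level supervised contrastive objective pulls each image embedding toward the text embedding of its class. This is an assumption-laden toy example in PyTorch, not the authors' code; the encoder architectures, embedding dimension, and temperature are illustrative choices.

```python
# Minimal sketch (not the authors' code) of dual-stream visual-linguistic
# alignment via supervised contrastive learning. All hyperparameters and
# encoder architectures are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """Toy visual stream: flattens a hyperspectral patch and projects it."""
    def __init__(self, in_dim, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))
    def forward(self, x):
        return F.normalize(self.net(x.flatten(1)), dim=-1)

class TextEncoder(nn.Module):
    """Toy linguistic stream: embeds a class id and projects it."""
    def __init__(self, num_classes, embed_dim=128):
        super().__init__()
        self.embed = nn.Embedding(num_classes, embed_dim)
        self.proj = nn.Linear(embed_dim, embed_dim)
    def forward(self, class_ids):
        return F.normalize(self.proj(self.embed(class_ids)), dim=-1)

def image_text_contrastive_loss(img_feat, txt_feat_per_class, labels, tau=0.07):
    """Each image embedding is attracted to the text embedding of its own
    class and repelled from those of all other classes."""
    logits = img_feat @ txt_feat_per_class.t() / tau   # (B, num_classes)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    B, bands, patch, num_classes = 8, 200, 9, 16
    img_enc = ImageEncoder(in_dim=bands * patch * patch)
    txt_enc = TextEncoder(num_classes)
    x = torch.randn(B, bands, patch, patch)      # source-domain HSI patches
    y = torch.randint(0, num_classes, (B,))      # class labels
    txt = txt_enc(torch.arange(num_classes))     # one text embedding per class
    loss = image_text_contrastive_loss(img_enc(x), txt, y)
    loss.backward()
    print(float(loss))
```

In this sketch the shared semantic space is simply the common normalized embedding space; the paper's generator for extended-domain samples and its full training procedure are not reproduced here.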
