Abstract
Transfer learning is an effective way to alleviate the problem of insufficient training samples in hyperspectral image (HSI) classification. However, existing transfer learning-based methods usually transfer knowledge from a single source domain, such as the natural image domain, and therefore cannot transfer spectral and spatial knowledge to the target HSI domain simultaneously. In general, natural images contain rich spatial structure and texture information, while HSIs contain abundant spectral information. To better exploit the knowledge learned from both natural image datasets and HSI datasets, we propose a multimodal transfer feature fusion network (MTFFN) for HSI classification. In MTFFN, a dual-branch network structure transfers the two modalities of knowledge from the natural image domain and the source HSI domain to the target domain, one modality per branch. A multitask learning strategy is adopted to fuse the features of the two branches, and the fused features produce the final classification result. Moreover, a local attention mechanism is designed to extract more meaningful spectral features. Experiments on two public datasets show that the proposed method is effective (code: https://github.com/HuaipYan/MTFFN).
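The dual-branch design with multitask fusion and local spectral attention can be illustrated with a minimal sketch. The code below is not the authors' released implementation (see the repository above); it assumes PyTorch and hypothetical names (LocalSpectralAttention, DualBranchFusionNet) with HSI patch inputs of shape (batch, bands, height, width).

# Minimal sketch, not the authors' code: dual-branch feature fusion with a
# simple local spectral attention and multitask classification heads.
import torch
import torch.nn as nn

class LocalSpectralAttention(nn.Module):
    # Re-weights spectral bands with a local 1-D convolution over the band axis.
    def __init__(self, kernel_size=5):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                          # x: (B, bands, H, W)
        w = x.mean(dim=(2, 3))                     # (B, bands) band-wise descriptor
        w = self.conv(w.unsqueeze(1)).squeeze(1)   # local attention over neighboring bands
        w = torch.sigmoid(w).view(x.size(0), -1, 1, 1)
        return x * w                               # band-reweighted patch

class DualBranchFusionNet(nn.Module):
    # Spectral branch and spatial branch; fused features feed the main classifier,
    # while per-branch heads serve as auxiliary tasks (multitask learning).
    def __init__(self, bands, num_classes):
        super().__init__()
        self.attn = LocalSpectralAttention()
        self.spectral = nn.Sequential(             # branch that could load HSI-pretrained weights
            nn.Conv2d(bands, 64, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.spatial = nn.Sequential(              # branch that could load natural-image-pretrained weights
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_spec = nn.Linear(64, num_classes)    # auxiliary spectral head
        self.head_spat = nn.Linear(64, num_classes)    # auxiliary spatial head
        self.head_fused = nn.Linear(128, num_classes)  # main head on fused features

    def forward(self, x):
        x = self.attn(x)
        f_spec, f_spat = self.spectral(x), self.spatial(x)
        fused = torch.cat([f_spec, f_spat], dim=1)
        return self.head_fused(fused), self.head_spec(f_spec), self.head_spat(f_spat)

In this sketch, the multitask loss would be a weighted sum of cross-entropy terms over the three outputs; the weighting scheme and the exact backbone of each branch are assumptions, not details taken from the paper.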