Abstract

Although the unique advantages of optical and synthetic aperture radar (SAR) images motivate their fusion, integrating the complementary features of the two data types and fusing them effectively remains a vital problem. To address this, a novel framework is designed based on the observation that the structure of SAR images and the texture of optical images are complementary. The proposed framework, named SOSTF, is an unsupervised end-to-end fusion network that integrates structural features from SAR images and detailed texture features from optical images into the fusion results. The method adopts a nested connection-based architecture comprising an encoder network, a fusion part, and a decoder network. To preserve the structure and texture information of the input images, the encoder is used to extract multi-scale features. A densely connected convolutional network (DenseNet) then performs feature fusion, and a decoder network reconstructs the fused image. In the training stage, a structure-texture decomposition model is introduced, and a novel texture-preserving and structure-enhancing loss function is designed to train the DenseNet so that the structure and texture features of the fusion results are enhanced. Qualitative and quantitative comparisons of the fusion results with nine advanced methods demonstrate that the proposed method fuses the complementary features of SAR and optical images more effectively.
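The abstract refers to a structure-texture decomposition model used during training. The paper's specific model is not given here, but the general idea can be sketched with a simple smoothing-based split: a low-pass filter yields the structure component, and the residual is the texture component. The function names and the choice of a box filter below are illustrative assumptions, not the authors' actual decomposition.

```python
import numpy as np

def box_blur(img, k=5):
    # Illustrative low-pass filter: separable box blur with an odd
    # kernel size k, using edge padding so output shape matches input.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    # Filter rows, then columns (separable convolution).
    tmp = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, tmp)

def structure_texture_split(img, k=5):
    # Structure = smoothed image; texture = high-frequency residual.
    # By construction, structure + texture reconstructs the input exactly.
    structure = box_blur(img, k)
    texture = img - structure
    return structure, texture
```

Any edge-preserving smoother (e.g. total-variation or guided filtering) could replace the box blur in such a scheme; the key property exploited by the loss function is that the two components sum back to the input image.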
