Abstract
Synchronous multi-band image fusion is a challenging yet urgent task in the development of high-precision detection systems. This study proposes a novel method for synchronous fusion modeling of multi-band images based on task interdependency. In the proposed method, the task of image fusion is divided into two mutually exclusive sub-tasks: producing bright thermal targets and obtaining precise textural details. First, two generators with different network structures and several discriminators produce a preliminary fused image. Second, an image fusion strategy combining model- and data-driven theory is defined to obtain fused images. Then, each discriminator classifies the fused image against the source images of each band, forcing the generators to produce the desired results. A novel loss function is constructed to enhance the fusion effect by selecting the most significant gradient loss and brightness loss. Finally, the network is trained within a multi-generative adversarial framework. The trained generators can be used individually or jointly as a model for fusing multiple images. We verified our method on several datasets and found that it outperforms other current methods.
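The loss described above selects, at each location, the most significant gradient across the source bands (for textural detail) and penalizes deviation from the brightest source intensities (for salient thermal targets). A minimal NumPy sketch of this idea is given below; the function and weighting scheme are hypothetical illustrations, not the paper's exact formulation:

```python
import numpy as np

def gradient_magnitude(img):
    # Finite-difference gradient magnitude, a simple proxy for textural detail.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def fusion_loss(fused, sources, alpha=1.0, beta=1.0):
    """Hypothetical fusion loss: a gradient term toward the per-pixel
    most significant source gradient, plus a brightness term toward the
    brightest source (to preserve bright thermal targets)."""
    # Most significant gradient across the source bands at each pixel.
    grads = np.stack([gradient_magnitude(s) for s in sources])
    target_grad = grads.max(axis=0)
    grad_loss = np.mean((gradient_magnitude(fused) - target_grad) ** 2)

    # Brightness loss: pull the fused image toward the brightest band.
    target_intensity = np.stack(sources).max(axis=0)
    brightness_loss = np.mean((fused - target_intensity) ** 2)

    return alpha * grad_loss + beta * brightness_loss
```

In an adversarial setup, a weighted term like this would be added to each generator's objective alongside the discriminator losses, so that the two generators specialize in their respective sub-tasks while still producing a single consistent fused image.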