Synchronous multi-band image fusion is a challenging yet urgent task in the development of high-precision detection systems. This study proposes a novel method for synchronous fusion modeling of multi-band images based on task interdependency. In the proposed method, the task of image fusion is divided into two complementary sub-tasks: producing bright thermal targets and recovering precise textural details. First, two generators with different network structures and several discriminators produce a preliminary fused image. Second, an image fusion strategy is defined using model- and data-driven theory to obtain the fused images. Then, each discriminator classifies the fused image against the source image of its band, forcing the generators to produce the desired results. A novel loss function, which selects the most significant gradient loss and brightness loss, is constructed to enhance the fusion effect. Finally, the network is trained within a multi-generative adversarial framework. The trained generators can be used individually or jointly as a model for fusing multiple images. We verified our method on several datasets and found that it outperforms other current methods.
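The loss construction mentioned above (selecting the most significant gradient loss together with a brightness term) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed definitions, not the authors' exact formulation: the per-pixel "most significant" gradient is taken as the element-wise maximum of the source gradients, the brightness target as the element-wise maximum intensity, and the function names (`gradient_magnitude`, `fusion_loss`) and the weight `alpha` are hypothetical.

```python
import numpy as np

def gradient_magnitude(img):
    # Forward-difference gradients, padded so the output keeps the input shape.
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.sqrt(gx ** 2 + gy ** 2)

def fusion_loss(fused, ir, vis, alpha=0.5):
    """Sketch of a joint fusion loss: the fused gradient is pulled toward the
    stronger (most significant) source gradient at each pixel, and the fused
    intensity toward the brighter source, preserving bright thermal targets."""
    g_ir, g_vis, g_f = map(gradient_magnitude, (ir, vis, fused))
    target_grad = np.maximum(g_ir, g_vis)          # most significant gradient
    grad_loss = np.mean((g_f - target_grad) ** 2)  # textural-detail term
    target_int = np.maximum(ir, vis)               # keep bright thermal targets
    int_loss = np.mean((fused - target_int) ** 2)  # brightness term
    return grad_loss + alpha * int_loss
```

In a GAN training loop, a term of this shape would typically be added to the adversarial losses of the generators; the discriminator losses described in the abstract are omitted here for brevity.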