Existing infrared and visible image fusion algorithms suffer from reduced saliency of infrared targets and a loss of background texture information. To address these problems, a generative adversarial network for image fusion based on an attention mechanism is proposed in this study. The network consists of a generator, a discriminator, and an adaptive decision block. The generator employs a DenseNet-based cascade structure, which substantially reduces information loss during convolution. Unlike a global discriminator, the embedded Markovian discriminator divides images into patches, so the network focuses on local regions rather than discriminating the generated image as a whole. In addition, an adaptive decision block is included in the model to build an intensity dynamic mapping mechanism: a weight map derived from the intensity information of the infrared image guides the computation of the content loss. Unlike fixed loss formulations, the weight of each component of the loss function is evaluated adaptively. To demonstrate the effectiveness of the proposed method, qualitative and quantitative experiments are carried out on the TNO and RoadScene datasets. The experimental results show that the proposed method retains more salient features of the source images and produces fused images with rich texture details. Furthermore, the noise robustness of each method is examined. Under different noise conditions, the fusion results of the proposed approach still exhibit less distortion and a higher peak signal-to-noise ratio than those of other state-of-the-art fusion methods.
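To make the intensity-guided loss described above concrete, the following PyTorch sketch shows one plausible form of such a weighted content loss. It is a minimal illustration, not the authors' implementation: the min-max weight map, the gradient-based texture term, and the adaptive trade-off coefficient are all assumptions introduced here for clarity.

```python
import torch

def intensity_weight_map(ir, eps=1e-6):
    """Hypothetical weight map: min-max normalize infrared intensities to
    [0, 1] so that bright (salient) IR regions receive larger weights."""
    ir_min = ir.amin(dim=(2, 3), keepdim=True)
    ir_max = ir.amax(dim=(2, 3), keepdim=True)
    return (ir - ir_min) / (ir_max - ir_min + eps)

def gradients(img):
    """Simple finite-difference gradients as a texture proxy."""
    dx = img[:, :, :, 1:] - img[:, :, :, :-1]
    dy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return dx, dy

def weighted_content_loss(fused, ir, vis):
    """Intensity-weighted content loss (illustrative form).

    The IR-derived map w pulls the fused image toward IR intensities in
    salient regions, while (1 - w) pulls it toward visible-image gradients
    (texture) elsewhere. The exact weighting scheme is an assumption."""
    w = intensity_weight_map(ir)
    intensity_term = torch.mean(w * (fused - ir) ** 2)
    fdx, fdy = gradients(fused)
    vdx, vdy = gradients(vis)
    texture_term = (torch.mean((1 - w)[:, :, :, 1:] * (fdx - vdx) ** 2)
                    + torch.mean((1 - w)[:, :, 1:, :] * (fdy - vdy) ** 2))
    # Stand-in for the adaptive decision block: derive the trade-off
    # coefficient from image statistics instead of fixing it by hand.
    alpha = ir.mean().clamp(0.1, 0.9)  # hypothetical adaptive weight
    return alpha * intensity_term + (1 - alpha) * texture_term
```

In this sketch the coefficient alpha replaces a hand-tuned constant, mirroring the abstract's claim that the loss weights are evaluated adaptively rather than fixed in advance.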