Infrared (IR) and visible (VI) image fusion produces fused images that contain richer information and offer improved visual quality. Existing methods generally rely on manually designed operators, such as intensity and gradient operators, to extract image information. However, such operators struggle to describe the information completely and accurately, which limits fusion performance. To this end, a novel information measurement method is proposed for IR and VI image fusion. Its core idea is to guide a generator toward image fusion by learning denoisers. Specifically, the denoisers are trained to restore fused images corrupted by different noise back to the source images, which places them in mutual competition; this competition drives the generator to thoroughly explore the data specificity of the source images and guides it toward a more accurate feature representation. In addition, a semantic adaptive measurement loss function is proposed to constrain the generator; it fuses semantic information adaptively by accounting for the differing semantic information density of each source image. Quantitative and qualitative experiments on three public datasets show that, compared with state-of-the-art methods, the proposed method achieves higher-quality information fusion at a faster fusion speed.
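The semantic adaptive weighting idea can be illustrated with a toy sketch. The abstract does not specify how semantic information density is measured, so this sketch uses Shannon entropy as a stand-in and a weighted L1 term as a stand-in for the measurement loss; all function names are hypothetical, not the paper's implementation.

```python
import math

def entropy(img, bins=8):
    # Shannon entropy of a flat grayscale image (values in [0, 1]),
    # used here as a crude proxy for "semantic information density".
    hist = [0] * bins
    for v in img:
        hist[min(int(v * bins), bins - 1)] += 1
    n = len(img)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def adaptive_weights(ir, vi):
    # Normalize the two densities so the richer source gets a larger weight.
    e_ir, e_vi = entropy(ir), entropy(vi)
    total = e_ir + e_vi
    if total == 0:          # both images flat: fall back to equal weights
        return 0.5, 0.5
    return e_ir / total, e_vi / total

def semantic_adaptive_loss(fused, ir, vi):
    # Weighted L1 distance of the fused image to each source image,
    # with weights adapted to each source's information density.
    w_ir, w_vi = adaptive_weights(ir, vi)
    l1 = lambda a, b: sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return w_ir * l1(fused, ir) + w_vi * l1(fused, vi)
```

With a textured IR image and a uniform VI image, the weighting pushes the fused result toward the information-rich source; in the paper's setting these weights would constrain the generator during training.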