Abstract

Infrared and visible image fusion methods aim to combine high-intensity instance features and detailed texture features into a single fused image. However, their ability to capture compact features under various adverse conditions is limited because the distribution of these multimodal features is generally cluttered; targeted designs are therefore necessary to constrain the multimodal features to be compact. In addition, many existing methods are not robust to low-quality images under adverse conditions, and the long fusion time of most methods hinders subsequent vision tasks. To address these issues, we propose a generative adversarial network with intensity attention modules and semantic transition modules, termed AT-GAN, which extracts key information from multimodal images more efficiently. The intensity attention modules aim to preserve infrared instance features clearly, while the semantic transition modules filter out noise and other redundant features in the visible texture. Moreover, an adaptive fusion equilibrium point can be learned by a quality assessment module. Finally, experiments on a variety of datasets show that AT-GAN adaptively learns feature fusion and image reconstruction simultaneously, and that it improves runtime while retaining the fusion superiority of the proposed method over the state of the art.
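To make the idea of an intensity attention module concrete, below is a minimal PyTorch sketch of one plausible realization: a learned per-pixel mask that re-weights feature maps so high-intensity (infrared instance) regions are emphasized. The abstract does not specify AT-GAN's layer definitions, so the class name, channel sizes, and the sigmoid-gating design here are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an intensity-attention block (not the official AT-GAN code).
import torch
import torch.nn as nn


class IntensityAttention(nn.Module):
    """Re-weights feature maps by a learned intensity mask so that
    high-intensity regions (e.g., infrared targets) are emphasized."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-pixel attention weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multiply features by the single-channel mask (broadcast over channels).
        return x * self.mask(x)


if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)   # toy infrared feature map
    out = IntensityAttention(64)(feats)
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

A semantic transition module could follow the same gating pattern but be trained to suppress, rather than preserve, noisy or redundant visible-texture responses.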
