Abstract

Infrared and visible image fusion aims to generate a fused image with rich information. Although most fusion methods achieve good performance, they still fall short in extracting feature information from the source images, which makes it difficult to balance thermal radiation information and texture detail in the fused image. To address these issues, an expectation-maximization (EM) learning framework based on generative adversarial networks (GANs) for infrared and visible image fusion is proposed. The EM algorithm (EMA) obtains maximum likelihood estimates for problems with latent variables, which helps to overcome the lack of labels in infrared and visible image fusion. An axial-corner attention mechanism is designed to capture long-range semantic information and texture information from the visible image. A multifrequency attention mechanism mines the relationships between features at different scales to highlight infrared target information in the fused result. Meanwhile, two discriminators are used to balance the two kinds of features, and a new loss function is designed to maximize the likelihood of the data under soft class label assignments obtained from the expectation network. Extensive experiments demonstrate the superiority of EMA-GAN over the state of the art.
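
For reference, the abstract does not state the paper's exact objective; the generic EM iteration it builds on alternates an expectation step over latent variables $z$ (here, the unobserved label assignments) and a maximization step over model parameters $\theta$:

\[
Q\!\left(\theta \mid \theta^{(t)}\right) = \mathbb{E}_{z \sim p\left(z \mid x,\, \theta^{(t)}\right)}\!\left[\log p(x, z \mid \theta)\right],
\qquad
\theta^{(t+1)} = \arg\max_{\theta}\; Q\!\left(\theta \mid \theta^{(t)}\right).
\]

In this framework, the expectation network plays the role of the E-step by producing soft class label assignments, and the generator and discriminators are updated in the M-step to maximize the resulting likelihood-based loss.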
