Abstract

Long-wave infrared (thermal) images distinguish targets from the background according to differences in thermal radiation. They are insensitive to lighting conditions but cannot present the details captured by reflected light. By contrast, visible images offer high spatial resolution and rich texture details, but they are easily affected by occlusion and lighting conditions. Combining the advantages of the two sources can generate a new image with clear targets and high resolution, which satisfies requirements for all-weather and all-day/night operation. Most existing methods cannot fully capture the underlying characteristics of infrared and visible images and ignore the complementary information between the sources. In this paper, we propose an end-to-end model (TSFNet) for infrared and visible image fusion that handles both sources simultaneously. In addition, it adopts an adaptive weight allocation strategy to capture informative global features. Experiments on public datasets demonstrate that the proposed fusion method achieves state-of-the-art performance in both global visual quality and quantitative comparison.
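To make the idea of adaptive weight allocation concrete, the sketch below shows one common way such a fusion step can be realized: per-pixel weight maps, normalized with a softmax, blend features from the two modalities before decoding. This is a minimal illustration under assumed layer sizes and module names (AdaptiveFusion, weight_head), not the paper's actual TSFNet architecture, which the abstract does not specify.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Minimal sketch of adaptive weight-based fusion (illustrative only).

    Two encoder branches extract features from the infrared and visible
    inputs; a small convolutional head predicts two per-pixel scores,
    normalized with softmax so the resulting weights sum to 1 at every
    pixel, and the weighted feature sum is decoded into a fused image.
    """

    def __init__(self, channels: int = 64):
        super().__init__()
        # One encoder per modality (assumed design, not from the paper).
        self.enc_ir = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.enc_vis = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True))
        # Head that scores each branch from the concatenated features.
        self.weight_head = nn.Conv2d(2 * channels, 2, 3, padding=1)
        self.dec = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, ir: torch.Tensor, vis: torch.Tensor) -> torch.Tensor:
        f_ir, f_vis = self.enc_ir(ir), self.enc_vis(vis)
        # Adaptive weights: softmax over the two branch scores.
        w = torch.softmax(
            self.weight_head(torch.cat([f_ir, f_vis], dim=1)), dim=1)
        fused = w[:, 0:1] * f_ir + w[:, 1:2] * f_vis
        return torch.sigmoid(self.dec(fused))

# Usage: fuse a batch of single-channel infrared and visible images.
model = AdaptiveFusion()
out = model(torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256))
print(out.shape)  # torch.Size([1, 1, 256, 256])
```

Because the weights are predicted from the inputs themselves, the blend shifts toward whichever modality is more informative at each location, which is the intuition behind letting the network allocate weights adaptively rather than fixing a global mixing ratio.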
