Abstract
Infrared and visible image fusion aims to synthesize a single fused image that not only contains salient targets and abundant texture details but also facilitates high-level vision tasks. However, existing fusion algorithms focus solely on the visual quality and statistical metrics of fused images while ignoring the demands of high-level vision tasks. To address these challenges, this paper bridges the gap between image fusion and high-level vision tasks and proposes a semantic-aware real-time image fusion network (SeAFusion). On the one hand, we cascade the image fusion module and semantic segmentation module and leverage the semantic loss to guide high-level semantic information to flow back to the image fusion module, which effectively boosts the performance of high-level vision tasks on fused images. On the other hand, we design a gradient residual dense block (GRDB) to enhance the fusion network's ability to describe fine-grained spatial details. Extensive comparative and generalization experiments demonstrate the superiority of our SeAFusion over state-of-the-art alternatives in terms of maintaining pixel intensity distribution and preserving texture detail. More importantly, the performance comparison of various fusion algorithms in task-driven evaluation reveals the natural advantages of our framework in facilitating high-level vision tasks. In addition, the superior running efficiency allows our algorithm to be effortlessly deployed as a real-time pre-processing module for high-level vision tasks. The source code will be released at https://github.com/Linfeng-Tang/SeAFusion.
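The cascaded training objective described above can be sketched in a few lines. The snippet below is a hypothetical, simplified illustration (not the paper's released code): a fusion step produces the fused image, a segmentation head evaluates it, and the total loss combines a content term with a semantic term whose gradient flows back to the fusion module. The function names, the placeholder element-wise-max fusion, and the weight `beta` are all illustrative assumptions.

```python
import numpy as np

def fuse(ir, vis):
    # Placeholder fusion: element-wise max retains salient infrared targets
    # and bright visible details (the real SeAFusion network learns this mapping).
    return np.maximum(ir, vis)

def content_loss(fused, ir, vis):
    # Intensity term: keep fused pixels close to the brighter source pixel.
    return float(np.mean((fused - np.maximum(ir, vis)) ** 2))

def semantic_loss(seg_logits, labels):
    # Cross-entropy on the segmentation of the fused image; during joint
    # training its gradient propagates back into the fusion module.
    probs = np.exp(seg_logits) / np.exp(seg_logits).sum(axis=-1, keepdims=True)
    return float(-np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12)))

def total_loss(fused, ir, vis, seg_logits, labels, beta=0.5):
    # Joint objective: content fidelity plus semantically guided term,
    # balanced by an illustrative trade-off weight `beta`.
    return content_loss(fused, ir, vis) + beta * semantic_loss(seg_logits, labels)
```

In the actual framework both terms are differentiated through learned networks; this sketch only conveys how the semantic loss is folded into the fusion objective.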