Abstract
In infrared and visible image fusion, the goal is to combine complementary features from both modalities into a unified representation. However, not all regions of an image are equally important: target objects, which often drive subsequent decision-making, warrant particular attention. Conventional deep-learning fusion approaches primarily optimize textural detail across the entire image at the pixel level, neglecting target objects and their relevance to downstream visual tasks. To address these limitations, TDDFusion, a Target-Driven Dual-Branch Fusion Network, is introduced. It is explicitly designed to enhance the prominence of target objects in the fused image, thereby bridging the performance gap between pixel-level fusion and downstream object detection. The architecture consists of a parallel, dual-branch feature extraction network comprising a Global Semantic Transformer (GST) and a Local Texture Encoder (LTE). During training, a dedicated object detection submodule backpropagates semantic loss into the fusion network, enabling task-oriented optimization of the fusion process. A novel loss function leverages target positional information to amplify visual contrast and detail specific to target objects. Extensive experiments on three public datasets demonstrate the model's superiority in preserving global environmental information and local detail, outperforming state-of-the-art alternatives in balancing pixel intensity and maintaining the texture of target objects. Most importantly, it exhibits significant advantages in downstream object detection tasks.
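The abstract does not specify how target positions enter the loss. The sketch below is a minimal, hypothetical PyTorch-style illustration of one way a target-aware loss could use ground-truth bounding boxes to weight intensity and gradient (texture) terms more heavily inside target regions; the function names, the max-based reference images, and the `target_gain` hyperparameter are assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F


def boxes_to_mask(boxes, height, width, device):
    """Rasterize ground-truth boxes (x1, y1, x2, y2) into a binary target mask."""
    mask = torch.zeros(1, 1, height, width, device=device)
    for x1, y1, x2, y2 in boxes:
        mask[..., int(y1):int(y2), int(x1):int(x2)] = 1.0
    return mask


def sobel_gradient(img):
    """Approximate image gradients with fixed Sobel kernels (texture proxy)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx.abs() + gy.abs()


def target_aware_loss(fused, ir, vis, boxes, target_gain=4.0):
    """Weight intensity and texture terms more heavily inside target boxes.

    `target_gain` is a hypothetical hyperparameter, not a value from the paper.
    """
    _, _, h, w = fused.shape
    weight = 1.0 + target_gain * boxes_to_mask(boxes, h, w, fused.device)

    # Intensity term: encourage the fused pixel to follow the brighter source.
    intensity = (weight * (fused - torch.maximum(ir, vis)).abs()).mean()

    # Texture term: encourage the fused gradient to follow the stronger source gradient.
    grad_ref = torch.maximum(sobel_gradient(ir), sobel_gradient(vis))
    texture = (weight * (sobel_gradient(fused) - grad_ref).abs()).mean()

    return intensity + texture
```

In training, `fused` would come from the fusion network's output, so the weighted loss amplifies gradients flowing back through pixels inside the detected target boxes; the semantic loss from the detection submodule described in the abstract would be added on top of this pixel-level term.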