Abstract
For image fusion to be practically useful, the fusion process should be guided to serve downstream vision tasks. In this paper, we propose an Adaptive Cross-Fusion Network (ACFNet) that fuses infrared and visible images adaptively, addressing cross-modal differences to enhance object detection performance. In ACFNet, a hierarchical cross-fusion module is designed to enrich the features at each level of the reconstructed images. In addition, an adaptive gating selection module is proposed to perform feature fusion in an adaptive manner, yielding fused images free from the bias of hand-crafted fusion rules. Extensive qualitative and quantitative experiments demonstrate that ACFNet outperforms current state-of-the-art fusion methods and excels at preserving target information and texture details. When combined with an object detection framework, the fusion framework can significantly improve detection precision in low-light conditions.
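The abstract does not specify the internals of the adaptive gating selection module, but the general idea of gated cross-modal fusion can be illustrated with a minimal sketch: a sigmoid gate produces a per-element weight in (0, 1) that decides how much of each modality's feature map to keep. This is a generic assumption for illustration, not ACFNet's actual implementation; the function and variable names (`gated_fuse`, `gate_logits`) are hypothetical.

```python
import numpy as np

def sigmoid(x):
    # Map real-valued gate logits to weights in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def gated_fuse(feat_ir, feat_vis, gate_logits):
    """Blend infrared and visible feature maps with a learned gate.

    In a trained network, gate_logits would typically come from a small
    conv layer over the concatenated features; here it is an input.
    """
    g = sigmoid(gate_logits)
    # Per-element convex combination of the two modalities
    return g * feat_ir + (1.0 - g) * feat_vis

# Toy 2x2 "feature maps" for the two modalities
ir = np.array([[1.0, 0.0], [0.5, 0.2]])
vis = np.array([[0.0, 1.0], [0.5, 0.8]])

# Zero logits give a gate of 0.5 everywhere, i.e. a simple average
fused = gated_fuse(ir, vis, np.zeros((2, 2)))
```

With zero logits the gate degenerates to equal weighting; during training, the logits would adapt so that, for example, infrared features dominate in dark regions where visible-light texture is unreliable.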