Abstract

Infrared and visible image fusion, which combines the radiometric information of infrared images with the detailed textures of visible images to describe objects completely and accurately, is a long-standing and well-studied task in computer vision. Existing convolutional neural network (CNN)-based approaches, which leverage end-to-end networks to fuse infrared and visible images, have made significant progress. However, most of them extract only local features in the encoder and adopt a coarse fusion strategy. Unlike these algorithms, this study proposes a multiscale receptive field amplification fusion network (MRANet) to effectively extract both local and global features from images. Specifically, we capture long-range information in the encoder using a convolutional residual structure as the main backbone and a simplified UniFormer as an auxiliary backbone, both of which are ResNet-inspired. Additionally, we propose an effective multiscale fusion strategy based on an attention mechanism to integrate the two modalities. Extensive experiments demonstrate that MRANet performs effectively on image fusion datasets.
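To make the attention-based fusion strategy concrete, the following is a minimal PyTorch sketch of one fusion step at a single encoder scale, in the spirit of the strategy described above. The module name AttentionFusion, the squeeze-and-excitation-style channel attention, and the hyperparameters are our illustrative assumptions, not the authors' implementation.

# Hypothetical sketch: attention-weighted fusion of infrared and visible
# feature maps at one scale. Names and design details are assumptions.
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Fuse infrared and visible feature maps of the same scale using
    channel attention (squeeze-and-excitation-style reweighting)."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # global context per channel
            nn.Conv2d(2 * channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, 1),
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )
        self.project = nn.Conv2d(2 * channels, channels, 1) # map back to C channels

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_ir, feat_vis], dim=1)  # (B, 2C, H, W)
        x = x * self.attn(x)                       # reweight each modality's channels
        return self.project(x)                     # fused feature map, (B, C, H, W)

if __name__ == "__main__":
    fuse = AttentionFusion(channels=64)
    ir = torch.randn(1, 64, 32, 32)   # infrared features at one scale
    vis = torch.randn(1, 64, 32, 32)  # visible features at the same scale
    print(fuse(ir, vis).shape)        # torch.Size([1, 64, 32, 32])

In a multiscale design of this kind, one such module would plausibly be applied at each encoder scale before the fused features are passed to the decoder.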
