Abstract
Infrared and visible image fusion is of great importance in all-weather target detection and surveillance. However, existing fusion algorithms suffer from problems such as missing information and blurred details. To address these problems, this paper proposes a dual-branch fusion network based on semantic and detail information (DBSD). The dual-branch structure fully captures the structural and textural features of multi-modal images, yielding fused images with complete features and clear details. The network adds multi-receptive-field blocks and large-kernel blocks to the semantic branch and uses full-scale skip connections to achieve multi-directional, multi-scale feature extraction. Densely connected blocks are used in the detail branch to improve feature reuse. A fusion block based on channel and spatial attention mechanisms integrates the multi-scale semantic and detail features, enhancing key features in the fusion result. A series of experiments on three publicly available datasets verifies the superiority of DBSD.
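To make the fusion-block idea concrete, the sketch below shows a minimal, parameter-free version of channel-then-spatial attention applied to concatenated branch features. This is an illustrative NumPy toy, not the paper's implementation: the learned layers of a real attention block are replaced here by a simple sigmoid gate over pooled statistics, and the function and variable names (`channel_attention`, `spatial_attention`, `fuse`) are assumptions for this example.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Pool over spatial dims, then gate each channel.
    # A real block would pass the pooled vectors through a learned MLP;
    # here a plain sigmoid stands in for it (illustrative assumption).
    avg = x.mean(axis=(1, 2))                 # (C,)
    mx = x.max(axis=(1, 2))                   # (C,)
    w = _sigmoid(avg + mx)                    # channel weights in (0, 1)
    return x * w[:, None, None]

def spatial_attention(x):
    # Pool across channels, then gate each spatial location.
    avg = x.mean(axis=0)                      # (H, W)
    mx = x.max(axis=0)                        # (H, W)
    w = _sigmoid(avg + mx)                    # spatial weights in (0, 1)
    return x * w[None, :, :]

def fuse(semantic_feat, detail_feat):
    # Concatenate the semantic- and detail-branch features along the
    # channel axis, then emphasise key features with channel attention
    # followed by spatial attention.
    x = np.concatenate([semantic_feat, detail_feat], axis=0)
    return spatial_attention(channel_attention(x))

# Toy example: 4-channel semantic and detail features on an 8x8 grid.
rng = np.random.default_rng(0)
sem = rng.random((4, 8, 8), dtype=np.float32)
det = rng.random((4, 8, 8), dtype=np.float32)
fused = fuse(sem, det)
print(fused.shape)  # (8, 8, 8): channels from both branches, reweighted
```

The key design point this illustrates is that attention reweights rather than discards: the output keeps the full concatenated channel depth, with salient channels and locations amplified.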