Abstract

Owing to the limitations of imaging equipment and technology, a single image captured by one sensor can rarely describe all the information in a scene. Multi-focus image fusion techniques provide an effective means of integrating images acquired from different sensors into a single fused image that is richer in scene information. To obtain better fusion results, we propose a novel deep learning architecture for multi-focus image fusion that operates end-to-end and requires no post-processing. Specifically, an innovative Siamese multi-scale feature extraction module is introduced to extract multi-scale image features. In addition, to enhance the quality of the fused image, we design a new adaptive fusion strategy that uses an attention mechanism instead of the hand-crafted rules of previous methods. Extensive experimental results confirm that the proposed algorithm surpasses existing state-of-the-art multi-focus image fusion methods both quantitatively and qualitatively. Furthermore, an additional experiment on an infrared and visible image dataset demonstrates that the proposed fusion framework is also capable of performing other fusion tasks with excellent performance. The code of the proposed fusion method is available at https://github.com/govenda/MSPA-Fuse.
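The abstract does not give implementation details, but a minimal sketch of the two ideas it names, a shared (Siamese) multi-scale feature extractor and an attention-based adaptive fusion step, might look as follows. This is PyTorch-style pseudocode under stated assumptions: all module names, channel sizes, and kernel sizes are illustrative choices, not the authors' actual MSPA-Fuse architecture.

```python
import torch
import torch.nn as nn

class MultiScaleExtractor(nn.Module):
    """Shared (Siamese) feature extractor with parallel multi-scale branches."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        # Parallel convolutions with different receptive fields capture
        # fine detail and coarser context from the same input (assumed design).
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return torch.cat([self.act(b(x)) for b in self.branches], dim=1)

class AttentionFusion(nn.Module):
    """Adaptive fusion: learned spatial attention weights replace hand-crafted rules."""
    def __init__(self, feat_ch=48):
        super().__init__()
        self.score = nn.Conv2d(feat_ch * 2, 2, kernel_size=1)

    def forward(self, f1, f2):
        # Softmax over the two sources gives per-pixel fusion weights.
        w = torch.softmax(self.score(torch.cat([f1, f2], dim=1)), dim=1)
        return w[:, 0:1] * f1 + w[:, 1:2] * f2

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.extract = MultiScaleExtractor()      # shared weights -> Siamese
        self.fuse = AttentionFusion(feat_ch=48)
        self.reconstruct = nn.Conv2d(48, 1, kernel_size=3, padding=1)

    def forward(self, img_a, img_b):
        fa, fb = self.extract(img_a), self.extract(img_b)  # same extractor for both inputs
        return self.reconstruct(self.fuse(fa, fb))         # fused image, no post-processing

# Usage example: fuse two grayscale source images end-to-end.
if __name__ == "__main__":
    a, b = torch.rand(1, 1, 256, 256), torch.rand(1, 1, 256, 256)
    fused = FusionNet()(a, b)
    print(fused.shape)  # torch.Size([1, 1, 256, 256])
```

The sketch only illustrates the end-to-end structure described in the abstract: both source images pass through one shared extractor, and a learned attention map, rather than a manually designed rule, decides how their features are combined at each pixel.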
