Abstract
How to efficiently and accurately identify and extract the focused regions of source images is a difficult problem in multi-focus image fusion. Existing fusion methods suffer from color distortion, loss of detail, and high computational cost, which limit the subsequent processing and real-time application of the fused images. To address these issues, this paper proposes a multi-focus image fusion method based on a Transformer and a feedback mechanism. The method combines a Transformer with a convolutional neural network (CNN), integrating the local information extracted by the CNN with the global information captured by the Transformer, which improves the accuracy of focus-region identification. In addition, a feedback mechanism supplies richer contextual information so that features are fully exploited, improving the network's performance in feature fusion. Comparative experiments against seven state-of-the-art fusion methods on the Lytro and Grayscale datasets show that the proposed algorithm is superior in both subjective and objective evaluations.
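The core idea of the abstract, combining local features from a convolutional branch with global features from a self-attention (Transformer) branch, can be illustrated with a minimal numpy sketch. This is not the authors' network: the kernel, token shapes, and concatenation-based fusion here are illustrative assumptions only.

```python
import numpy as np

def local_features(img, kernel):
    """Single 2D valid convolution, standing in for the CNN branch
    that captures local (neighborhood) information."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def global_features(tokens):
    """Single-head self-attention over patch tokens, standing in for
    the Transformer branch that mixes information globally."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)        # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
    return weights @ tokens                        # globally mixed tokens

def fuse(local, global_):
    """Toy fusion: concatenate local and global descriptors."""
    return np.concatenate([local.ravel(), global_.ravel()])

rng = np.random.default_rng(0)
img = rng.random((6, 6))                 # hypothetical 6x6 source image
loc = local_features(img, np.ones((3, 3)) / 9.0)   # 3x3 mean filter
tokens = rng.random((4, 8))              # hypothetical 4 patch tokens, dim 8
glob = global_features(tokens)
feat = fuse(loc, glob)                   # combined local + global descriptor
```

In the paper's actual network the two branches are learned and fused by dedicated layers with a feedback loop; this sketch only shows the structural split between local convolution and global attention.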