Abstract

Multi-focus image fusion aims to combine multiple partially focused images of the same scene into a single all-in-focus image, and sparse representation is one of the most effective approaches to this task. Traditional sparse-representation-based fusion methods use all image patches for dictionary learning, which introduces redundant information and leads to artifacts and extra computation time. To remove this redundancy and build a compact dictionary, a novel dictionary construction method based on joint patch grouping and informative patch sampling is proposed within a sparse-representation-based fusion framework. Nonlocal similarity is introduced into the joint patch grouping, so the source images are not treated independently: patches with similar structures across all source images are flagged as a group, and only one informative patch per group is selected for dictionary learning, simplifying the computation. The orthogonal matching pursuit (OMP) algorithm is applied to obtain the sparse coefficients, and the max-L1 fusion rule is adopted to reconstruct the fused image. Experimental results demonstrate the superiority of the proposed approach.
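As a rough illustration (not the authors' code), the per-patch OMP coding and max-L1 fusion step described above can be sketched as follows; the dictionary, patch dimensions, and sparsity levels here are arbitrary toy values standing in for a dictionary learned from the grouped informative patches:

```python
import numpy as np

def omp(D, y, k):
    """Minimal orthogonal matching pursuit: greedily select up to k atoms
    of D and refit the selected coefficients by least squares each step."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        atom = int(np.argmax(np.abs(D.T @ residual)))
        if atom not in support:
            support.append(atom)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity = 64, 128, 5

# Toy dictionary with unit-norm atoms (placeholder for the learned one).
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

def make_patch():
    # Synthetic "source" patch that is exactly sparse over D.
    c = np.zeros(n_atoms)
    idx = rng.choice(n_atoms, size=sparsity, replace=False)
    c[idx] = rng.standard_normal(sparsity)
    return D @ c

patch_a, patch_b = make_patch(), make_patch()

# Sparse-code each source patch over the shared dictionary with OMP.
coef_a = omp(D, patch_a, k=8)
coef_b = omp(D, patch_b, k=8)

# Max-L1 fusion rule: keep the coefficient vector with the larger L1 norm
# (a better-focused patch tends to show higher sparse-domain activity).
fused_coef = coef_a if np.abs(coef_a).sum() >= np.abs(coef_b).sum() else coef_b
fused_patch = D @ fused_coef
```

In the full method this selection is applied patch by patch, and the fused patches are aggregated back into the output image.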
