Abstract

Multi-focus image fusion plays an important role in image processing because it overcomes the depth-of-focus limitation of optical lens imaging by fusing a series of partially focused images of the same scene. Improvements to existing fusion methods concentrate on the image decomposition methods and the fusion strategies. However, most decompositions are performed on each image separately, which fails to account for the multi-image nature of fusion tasks and does not jointly exploit the consistent and inconsistent features of the two source images. This paper proposes a new cooperative image multiscale decomposition (CIMD) based on the mutually guided filter (MGF). With CIMD, the two source multi-focus images are decomposed cooperatively into base layers and detail layers through iterative application of the MGF. A saliency detection scheme based on a mean-guide combination filter guides the fusion of the detail layers, while a spatial-frequency-based strategy fuses the luminance and contour features in the base layers. Experiments are carried out on 28 pairs of publicly available multi-focus images, and the fusion results are compared with 7 state-of-the-art multi-focus image fusion methods. Experimental results show that the proposed method achieves better visual quality and objective assessment scores.
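To make the overall pipeline concrete, the following is a minimal sketch of a cooperative base/detail decomposition and fusion of two multi-focus images. It is not the paper's CIMD/MGF implementation: the mutual guidance is approximated here with a standard guided-filter smoothing in which each image is smoothed using the other as guide, the detail-layer saliency is approximated by local energy, and all function names and parameter values (radius, eps, n_iter, window sizes) are assumptions made for illustration.

```python
# Sketch of a cooperative base/detail decomposition and fusion pipeline.
# NOTE: an illustration only, not the paper's CIMD/MGF method.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_smooth(img, guide, radius=7, eps=1e-3):
    """Guided-filter smoothing of `img` steered by `guide` (box-filter form)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_i = uniform_filter(img, size)
    cov_gi = uniform_filter(guide * img, size) - mean_g * mean_i
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    a = cov_gi / (var_g + eps)
    b = mean_i - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def cooperative_decompose(img_a, img_b, n_iter=3):
    """Iteratively smooth each image with the other as guide, yielding
    base layers; the residuals are the detail layers."""
    base_a, base_b = img_a.copy(), img_b.copy()
    for _ in range(n_iter):
        base_a, base_b = guided_smooth(base_a, base_b), guided_smooth(base_b, base_a)
    return base_a, img_a - base_a, base_b, img_b - base_b

def spatial_frequency(img, size=9):
    """Local spatial frequency: RMS of horizontal and vertical first differences."""
    rf = np.zeros_like(img)
    cf = np.zeros_like(img)
    rf[:, 1:] = (img[:, 1:] - img[:, :-1]) ** 2
    cf[1:, :] = (img[1:, :] - img[:-1, :]) ** 2
    return np.sqrt(uniform_filter(rf, size) + uniform_filter(cf, size))

def fuse(img_a, img_b):
    base_a, det_a, base_b, det_b = cooperative_decompose(img_a, img_b)
    # Base layers: keep, per pixel, the base with higher local spatial frequency.
    fused_base = np.where(spatial_frequency(base_a) >= spatial_frequency(base_b),
                          base_a, base_b)
    # Detail layers: pick by a simple saliency proxy (local detail energy).
    sal_a = uniform_filter(det_a ** 2, 9)
    sal_b = uniform_filter(det_b ** 2, 9)
    fused_detail = np.where(sal_a >= sal_b, det_a, det_b)
    return fused_base + fused_detail

# Usage: fused = fuse(img_near_focus, img_far_focus) on grayscale float arrays in [0, 1].
```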
