Abstract

Multifocus image fusion is a demanding research field because modern imaging devices have a limited depth of field: the scene to be captured generally contains objects at different distances from the device, so a set of multifocus images is captured, each with different objects in focus. To improve situational awareness of the captured scene, these images must be fused together. Therefore, a multifocus image fusion algorithm based on a Convolutional Neural Network (CNN) and a triangulated fuzzy filter is proposed. The CNN extracts information about the focused pixels of the input images, and this information serves as the fusion rule for combining them. The extracted focus information may still need refinement near object boundaries, so an asymmetrical triangular fuzzy filter with a median center (ATMED) is employed to correctly classify the pixels near the boundary. Precise detection is essential here, since any misdetection may considerably degrade the fusion quality. The performance of the proposed algorithm is compared with state-of-the-art image fusion algorithms, both subjectively and objectively, using parameters such as edge strength, fusion loss (FL), fusion artifacts (FA), entropy, standard deviation (SD), spatial frequency (SF), structural similarity index measure (SSIM) and feature similarity index measure (FSIM). Experimental results show that the proposed algorithm produces an all-in-focus fused image and outperforms popular and recent image fusion methods.
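
The abstract describes a two-stage pipeline: a CNN-derived focus map acts as the fusion rule, and an ATMED fuzzy filter refines the decision near focused/defocused boundaries. The Python sketch below is only an illustration of that idea under assumptions, not the authors' implementation: the CNN focus map is treated as a given input in [0, 1], and the window size, ambiguity band, and the way the ATMED membership weights the two source images are hypothetical choices.

# Hedged sketch (assumptions noted above): fuse two multifocus images using a
# precomputed focus map and refine boundary-ambiguous pixels with an
# ATMED-style asymmetrical triangular membership function (median center).
import numpy as np

def atmed_membership(window: np.ndarray) -> float:
    """Asymmetrical triangular membership with median center, evaluated at
    the window's central pixel. Assumed form:
        mu(x) = (x - xmin) / (xmed - xmin)  for xmin <= x <= xmed
        mu(x) = (xmax - x) / (xmax - xmed)  for xmed <  x <= xmax
    """
    x = window[window.shape[0] // 2, window.shape[1] // 2]
    xmin, xmed, xmax = window.min(), np.median(window), window.max()
    if x <= xmed:
        return float((x - xmin) / (xmed - xmin)) if xmed > xmin else 1.0
    return float((xmax - x) / (xmax - xmed)) if xmax > xmed else 1.0

def fuse(img_a: np.ndarray, img_b: np.ndarray, focus_map: np.ndarray,
         win: int = 5, threshold: float = 0.5) -> np.ndarray:
    """Pixel-wise fusion: focus_map (values in [0, 1], e.g. produced by a CNN
    focus classifier) selects between img_a and img_b; pixels whose map value
    lies near the threshold are re-decided from the ATMED membership of the
    local focus-map window, smoothing the decision near object boundaries."""
    fused = np.where(focus_map >= threshold, img_a, img_b).astype(np.float64)
    pad = win // 2
    padded = np.pad(focus_map, pad, mode="reflect")
    rows, cols = focus_map.shape
    for i in range(rows):
        for j in range(cols):
            # Ambiguity band of 0.1 around the threshold is an assumed value.
            if abs(focus_map[i, j] - threshold) < 0.1:
                w = padded[i:i + win, j:j + win]
                weight = atmed_membership(w)
                fused[i, j] = weight * img_a[i, j] + (1.0 - weight) * img_b[i, j]
    return fused

In this sketch, clearly focused regions are copied directly from the better-focused source image, while only the uncertain boundary pixels receive a soft ATMED-weighted blend; the actual refinement rule used in the paper may differ.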
