Abstract

Multi-focus image fusion (MFF) is an effective way to eliminate the out-of-focus blur generated in the imaging process. The difficulty of distinguishing different blur levels and the lack of real supervised data have kept multi-focus image fusion a challenging task after decades of research. According to deep image prior (DIP) (Ulyanov et al., 2018), a neural network itself can capture the low-level statistics of a single image and has been successfully used as a prior for solving many inverse problems, without the need for handcrafted priors or priors learned from large-scale datasets. Motivated by this idea, we propose a novel multi-focus image fusion framework named ZMFF, comprising a deep image prior network that models the deep prior of the fused image and a deep mask prior network that models the deep prior of the focus map corresponding to each source image. By dispensing with labor-intensive collection of training pairs, our method achieves zero-shot learning and avoids the domain-shift problem caused by the inconsistency between manually degraded multi-focus images and real ones. To the best of our knowledge, it is the first unsupervised and untrained deep model for the MFF task. Extensive experiments on both synthetic and real-world datasets demonstrate the promising performance, generalization and flexibility of our approach. Source code is available at https://github.com/junjun-jiang/ZMFF.
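To make the framework concrete, the following is a minimal PyTorch sketch of the zero-shot optimization loop the abstract describes: two untrained networks (one for the fused image, one for the per-source focus maps) are optimized on a single set of source images, DIP-style. The network architectures, noise-code sizes, and the reconstruction loss here are simplified assumptions for illustration, not the authors' implementation; see the linked repository for that.

```python
# Hypothetical sketch of a ZMFF-style zero-shot fusion loop (not the authors' code).
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

class SmallNet(nn.Module):
    """Tiny stand-in for the deep image / deep mask prior networks."""
    def __init__(self, cin, cout):
        super().__init__()
        self.body = nn.Sequential(conv_block(cin, 64),
                                  conv_block(64, 64),
                                  nn.Conv2d(64, cout, 3, padding=1))

    def forward(self, x):
        return self.body(x)

def fuse(sources, iters=2000, lr=1e-3):
    """sources: list of K multi-focus inputs, each a (1, 3, H, W) tensor."""
    k = len(sources)
    _, c, h, w = sources[0].shape
    image_net = SmallNet(32, c)       # deep image prior for the fused image
    mask_net = SmallNet(32, k)        # deep mask prior for the K focus maps
    z_img = torch.randn(1, 32, h, w)  # fixed random code inputs, as in DIP
    z_mask = torch.randn(1, 32, h, w)
    opt = torch.optim.Adam(list(image_net.parameters()) +
                           list(mask_net.parameters()), lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        fused = torch.sigmoid(image_net(z_img))
        masks = torch.softmax(mask_net(z_mask), dim=1)  # maps sum to 1 per pixel
        # Fidelity: in regions where source i is judged in focus (large mask
        # value), the fused image should reproduce that source.
        loss = sum(((masks[:, i:i+1] * (fused - sources[i])) ** 2).mean()
                   for i in range(k))
        loss.backward()
        opt.step()
    return fused.detach(), masks.detach()
```

Because both networks are optimized from scratch on the input pair alone, no training set is needed, which is what makes the approach zero-shot and immune to the domain shift between synthetically blurred training data and real multi-focus images.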
