Multi-View Multi-Focus Image Fusion: A Novel Benchmark Dataset and Method
Multi-focus image fusion combines multiple images focused at different depths into a single clear image covering the whole scene. However, existing multi-focus image fusion methods do not account for camera or object movement during actual shooting. To address this, we propose an end-to-end deep learning network that generates an all-in-focus image from multi-view multi-focus images. Specifically, our method first warps the multi-view multi-focus images into a unified camera view using homography transformation matrices, then measures the defocus degree of co-located image patches through a focus information evaluation mechanism. Finally, our fusion network applies an adaptive fusion scheme to fuse the detected image patches into a clear image. To evaluate our fusion network, we construct a multi-view multi-focus image benchmark dataset (MVMFI) with more than 1000 image sequences. Experimental results demonstrate that our method outperforms state-of-the-art methods both qualitatively and quantitatively. The MVMFI dataset is available at https://github.com/North-Li/MVMFI.
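The two geometric steps named in the abstract, warping each view into a unified camera frame via a homography and scoring the sharpness of co-located patches, can be illustrated with a minimal NumPy sketch. This is not the paper's network: the inverse-mapping warp with nearest-neighbor sampling and the variance-of-Laplacian focus score below are standard stand-ins, and all function names are hypothetical.

```python
import numpy as np

def warp_homography(img, H, out_shape):
    """Warp a grayscale image into a target view via 3x3 homography H.

    Uses inverse mapping with nearest-neighbor sampling: for every pixel
    of the output grid, find where it came from in the source image.
    """
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates of the output grid, shape (3, N).
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    src = np.linalg.inv(H) @ coords.astype(float)
    src = src[:2] / src[2]                      # de-homogenize
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    valid = (sx >= 0) & (sx < img.shape[1]) & (sy >= 0) & (sy < img.shape[0])
    out = np.zeros(out_shape, dtype=img.dtype)
    out.reshape(-1)[valid] = img[sy[valid], sx[valid]]
    return out

def focus_measure(patch):
    """Variance of the discrete Laplacian: a simple per-patch sharpness
    score (higher = more in focus). Boundary pixels wrap via np.roll."""
    lap = (np.roll(patch, 1, 0) + np.roll(patch, -1, 0)
           + np.roll(patch, 1, 1) + np.roll(patch, -1, 1) - 4.0 * patch)
    return lap.var()
```

After warping every view with its homography, co-located patches can be compared with `focus_measure` and the sharpest one kept, which is the classical patch-selection baseline the paper's adaptive fusion scheme improves upon.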