Abstract

In this paper, we propose multi-focus image fusion using light field data and convolutional neural networks (CNNs). We mainly investigate data augmentation for network training: we use light field data to generate all-clear images and focus maps for training by refocusing. From multi-focus images, we produce focus maps at the pixel level using fully convolutional networks. This differs from other CNN-based fusion methods, which treat multi-focus fusion as binary classification and produce focus maps at the patch level. To train the proposed networks, we construct a multi-focus image dataset from light field data that contains both multi-focus images and their ground truth (all-clear images and focus maps). Experimental results show that the proposed method generates accurate focus maps close to the ground truth and outperforms state-of-the-art fusion methods in terms of quantitative measurements.
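Once a pixel-level focus map is available, the fusion step itself is a per-pixel blend of the two source images. The sketch below illustrates that blend under simple assumptions; the function name and toy data are illustrative and not taken from the paper, and the real method produces the focus map with a fully convolutional network rather than the hand-made mask used here.

```python
import numpy as np

def fuse_with_focus_map(img_a, img_b, focus_map):
    """Fuse two aligned multi-focus images with a per-pixel focus map.

    focus_map[i, j] is near 1 where img_a is in focus and near 0
    where img_b is in focus (hypothetical convention for this sketch).
    """
    # Broadcast the map over color channels if the images are RGB.
    w = focus_map[..., None] if img_a.ndim == 3 else focus_map
    return w * img_a + (1.0 - w) * img_b

# Toy example: img_a is sharp on the left half, img_b on the right.
a = np.full((4, 4), 10.0)
b = np.full((4, 4), 20.0)
m = np.zeros((4, 4))
m[:, :2] = 1.0  # left half taken from img_a

fused = fuse_with_focus_map(a, b, m)
```

With a binary map this selects pixels from one image or the other; a soft map (values in [0, 1]) blends smoothly across focus boundaries.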
