Abstract

We study the problem of multi-focus image fusion. We propose a novel framework based on convolutional network modeling, which directly learns a focus score map from the source images. The score map is then refined with simple post-processing, and a high-quality all-in-focus image is generated from the score map and the source images. The contributions of this work are three-fold. First, unlike most previous work, which relies on hand-crafted feature extraction to accomplish the fusion task, we leverage recent advances in convolutional networks, a representation-learning approach that automatically learns useful features for a wide range of tasks, to model multi-focus image fusion. Second, because labeled natural multi-focus images are scarce, we synthesize a sufficient number of multi-focus image-patch pairs as the training set so that the model can be trained efficiently. Third, the trained model can reliably distinguish focused from defocused regions in the source images and therefore produces an accurate score map for fusion. Experiments demonstrate that, compared with several recent representative methods, our method not only preserves richer detail in terms of visual quality but also achieves superior performance on objective assessments.
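To make the pipeline concrete, the following is a minimal NumPy sketch of the final fusion step only, assuming a per-pixel focus score map has already been predicted by the network and refined. The function name, the simple thresholding rule, and the stand-in data are illustrative assumptions, not the authors' exact post-processing.

```python
import numpy as np

def fuse_with_score_map(src_a, src_b, score, threshold=0.5):
    """Fuse two multi-focus source images using a per-pixel focus score map.

    src_a, src_b : float arrays of shape (H, W) or (H, W, C), same size.
    score        : float array of shape (H, W), values in [0, 1]; higher
                   values mean src_a is judged more in focus at that pixel.
    threshold    : illustrative cut-off for turning the score map into a
                   binary decision map (a stand-in for the refinement step).
    """
    # Binary decision map: 1 where src_a is considered focused, 0 otherwise.
    decision = (score > threshold).astype(src_a.dtype)
    if src_a.ndim == 3:  # broadcast over colour channels if present
        decision = decision[..., None]
    # Pixel-wise selection of the two source images.
    return decision * src_a + (1.0 - decision) * src_b

# Usage with random stand-in data:
# a = np.random.rand(256, 256, 3)
# b = np.random.rand(256, 256, 3)
# s = np.random.rand(256, 256)          # network-predicted focus scores
# fused = fuse_with_score_map(a, b, s)  # all-in-focus estimate
```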
