Abstract
Multifocus image fusion (MFIF), an efficient way to improve the visual quality of images with partial focus defects, is of great significance in the field of image enhancement. Based on the imaging principle of lenses, we summarize visual salience priors (VSP) from everyday photographic scenes together with two relationships inherent to MFIF, and on this basis we present an edge-sensitive model for MFIF. Supported by VSP, we exploit the correlation between salient object detection (SOD) and MFIF and adopt the former as a pre-training task. SOD exposes the network to realistic depth-of-field and bokeh effects and strengthens its ability to extract and represent the edges of focused objects. Meanwhile, given the scarcity of real multifocus training sets, we propose a randomized approach that generates large-scale training sets and pseudo-labels from limited unlabeled data. In addition, two attention modules are designed based on isometric domain transformation (IDT) from the traditional edge-preserving filtering field. IDT removes interfering information from feature maps at low cost, thereby facilitating channel-wise and spatial-wise weight assignment. Experimental results on four datasets show that our model outperforms many supervised models without requiring any real MFIF training set.
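To make the randomized training-data idea concrete, the sketch below shows one common way such pairs can be synthesized; it is not the authors' released code. From a single unlabeled, all-in-focus image, a random feathered region marks the "focused" area and Gaussian blur stands in for defocus elsewhere, yielding two complementary source images and a focus map usable as a pseudo-label. The function name, mask shape, and blur parameters are illustrative assumptions.

```python
# Hypothetical sketch of randomized multifocus pair generation; assumptions
# (elliptical mask, Gaussian defocus, parameter ranges) are not from the paper.
import cv2
import numpy as np

def make_multifocus_pair(img, max_blur_sigma=5.0, rng=None):
    """img: H x W x 3 all-in-focus image. Returns (source_a, source_b, focus_mask)."""
    rng = np.random.default_rng(rng)
    h, w = img.shape[:2]

    # Randomized defocus strength for this sample.
    sigma = rng.uniform(1.0, max_blur_sigma)
    blurred = cv2.GaussianBlur(img, (0, 0), sigma)

    # Random elliptical "in-focus" region; feather its border so the
    # focused/defocused transition is not a hard edge.
    mask = np.zeros((h, w), np.float32)
    center = (int(rng.uniform(0.2, 0.8) * w), int(rng.uniform(0.2, 0.8) * h))
    axes = (int(rng.uniform(0.1, 0.4) * w), int(rng.uniform(0.1, 0.4) * h))
    cv2.ellipse(mask, center, axes, rng.uniform(0, 180), 0, 360, 1.0, -1)
    mask = cv2.GaussianBlur(mask, (0, 0), 3.0)[..., None]

    # Complementary sources: A is sharp inside the region, B outside;
    # the mask itself serves as the pseudo-label (focus map).
    source_a = (mask * img + (1.0 - mask) * blurred).astype(img.dtype)
    source_b = ((1.0 - mask) * img + mask * blurred).astype(img.dtype)
    return source_a, source_b, mask.squeeze()
```

Because every mask, blur strength, and region geometry is drawn at random, a small pool of sharp images can be expanded into an effectively unlimited synthetic training set in this style.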