Abstract

Deep Learning (DL) has recently been applied to image fusion. DL-based multi-focus image fusion methods aim to produce a better decision map for fusing the input multi-focus images than earlier traditional methods operating in the spatial and transform domains. However, the Convolutional Neural Networks (CNN) and Fully Convolutional Networks (FCN) used in recent multi-focus image fusion methods yield unsuitable initial segmented decision maps, and their architectures contain a large number of parameters that must be updated during training. This paper proposes a simple DL-based multi-focus image fusion method inspired by the FCN model. The proposed architecture replaces the Fully Connected (FC) layer with appropriate convolution layers and removes the pooling layer (max-pooling), since pooling discards patch details that are useful for the multi-focus fusion task. The number of learnable parameters in the devised architecture is less than 2% of that of the previous FCN-based multi-focus image fusion method. These design choices make the initial decision map of the proposed network more accurate and cleaner than those of the other methods. Experiments on well-known real multi-focus images show that the proposed network outperforms state-of-the-art methods in both qualitative and quantitative assessments.
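To make the described design concrete, the following is a minimal sketch of a fully convolutional network of the kind the abstract outlines: no fully connected layers, no pooling, and a 1x1 convolution producing a per-pixel decision map. The framework (PyTorch), layer counts, channel widths, and kernel sizes are illustrative assumptions and not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FocusMapNet(nn.Module):
    """Illustrative fully convolutional network for multi-focus fusion.

    No FC layers and no pooling, so spatial detail is preserved and the
    parameter count stays small.  Widths/kernels are assumptions, not the
    paper's configuration.
    """

    def __init__(self, in_channels=2):
        super().__init__()
        self.features = nn.Sequential(
            # Input: the two source images stacked along the channel axis.
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # A 1x1 convolution stands in for the FC classifier and outputs
            # a per-pixel focus score (the initial decision map).
            nn.Conv2d(16, 1, kernel_size=1),
        )

    def forward(self, img_a, img_b):
        x = torch.cat([img_a, img_b], dim=1)
        return torch.sigmoid(self.features(x))  # decision map in [0, 1]


if __name__ == "__main__":
    net = FocusMapNet()
    a = torch.rand(1, 1, 128, 128)  # grayscale source image A
    b = torch.rand(1, 1, 128, 128)  # grayscale source image B
    decision_map = net(a, b)        # same spatial size as the inputs
    fused = decision_map * a + (1 - decision_map) * b  # weighted fusion
    print(decision_map.shape, fused.shape)
```

Because every layer is convolutional and no downsampling occurs, the decision map keeps the input resolution, which is consistent with the abstract's claim that removing pooling and FC layers yields a cleaner initial map with far fewer trainable parameters.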
