Abstract

In this paper, a novel conditional focus probability learning model, termed MCNN, is proposed for multi-focus image fusion (MFIF). Given a pair of source images, the well-trained MCNN generates their conditional focus probabilities, which are then converted into binary focus masks to directly produce an all-focus image with no post-processing. To this end, a fully convolutional encoder with two mutually coupled Siamese branches is designed in MCNN; coupling blocks bridge the two branches at different layers to provide conditional information to each other, so that the encoder extracts conditional focus features more effectively and the decoder is encouraged to produce more robust conditional focus probabilities pixel-wise. Moreover, a hybrid loss combining a structural sparse fidelity loss and a structural similarity loss is designed to force the network to learn more accurate conditional focus probabilities. In particular, a convolutional norm with good structural group sparsity is proposed to construct the structural sparse fidelity loss. Simulation results substantiate the superiority of our MCNN over other state-of-the-art methods, in terms of both visual perception and quantitative evaluation.
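
To make the described pipeline concrete, the following is a minimal sketch, assuming PyTorch, of how a coupled Siamese encoder, a probability decoder, mask binarisation, and a hybrid loss could fit together. The layer widths, the realisation of the coupling block, the L1 fidelity term (standing in for the paper's structural group-sparse convolutional norm), and the simplified image-level SSIM are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch of the MFIF pipeline described above, assuming PyTorch.
# Channel widths, the coupling realisation, and both loss terms are
# illustrative placeholders, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CouplingBlock(nn.Module):
    """Hypothetical coupling: mix a branch's features with its partner's."""
    def __init__(self, ch):
        super().__init__()
        self.mix = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, x_self, x_other):
        return F.relu(self.mix(torch.cat([x_self, x_other], dim=1)))


class MCNNSketch(nn.Module):
    """Coupled Siamese encoder + decoder giving a per-pixel focus probability."""
    def __init__(self, ch=32):
        super().__init__()
        self.stem = nn.Conv2d(1, ch, 3, padding=1)   # weights shared by both branches
        self.couple = CouplingBlock(ch)
        self.decode = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, a, b):
        fa, fb = F.relu(self.stem(a)), F.relu(self.stem(b))
        ca, cb = self.couple(fa, fb), self.couple(fb, fa)   # mutual conditioning
        return torch.sigmoid(self.decode(torch.cat([ca, cb], dim=1)))  # P(pixel of A is focused)


def fuse(a, b, prob, threshold=0.5):
    """Binarise the focus probability into a mask and fuse with no post-processing."""
    mask = (prob > threshold).float()
    return mask * a + (1.0 - mask) * b


def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified image-level SSIM (no sliding window), used only as a stand-in."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))


def hybrid_loss(prob, gt_mask, fused, reference, alpha=0.5):
    """Fidelity term (plain L1 here, in place of the structural group-sparse
    convolutional norm) plus a structural-similarity term."""
    return F.l1_loss(prob, gt_mask) + alpha * (1.0 - global_ssim(fused, reference))


# Usage with hypothetical shapes: a, b are grayscale source images in [0, 1].
net = MCNNSketch()
a, b = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
prob = net(a, b)
all_focus = fuse(a, b, prob)
```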
