Abstract

In this paper, we propose UIFGAN, a novel unsupervised continual-learning generative adversarial network for unified image fusion. Rather than training an individual model for each fusion task or jointly training all tasks at once, our model trains a single GAN with memory in a continual-learning manner across multiple image fusion tasks. We use elastic weight consolidation to avoid forgetting what was learned from previous tasks when the tasks are trained sequentially. Within each task, the fused image is produced through adversarial learning between a generator and a discriminator. Meanwhile, a max-gradient loss function forces the fused image to retain the richer texture details of the corresponding regions in the two source images, which applies to most typical image fusion tasks. Extensive experiments on multi-exposure, multi-modal and multi-focus image fusion tasks demonstrate the advantages of our method over state-of-the-art approaches.
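To make the two training components concrete, the following is a minimal PyTorch sketch of (a) a max-gradient loss that pulls the fused image's gradient magnitude toward the pixel-wise maximum of the two source gradients, and (b) the standard elastic weight consolidation penalty of Kirkpatrick et al. The abstract does not give the paper's exact formulations, so the function names (max_gradient_loss, ewc_penalty, grad_mag), the finite-difference gradient operator, and the L1 matching term are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def max_gradient_loss(fused, src1, src2):
    """Sketch of a max-gradient loss (assumed form): push the fused
    image's gradient magnitude toward the element-wise maximum of the
    two source images' gradient magnitudes at every pixel."""
    def grad_mag(x):
        # Simple finite-difference gradients, padded back to input size.
        dx = x[..., :, 1:] - x[..., :, :-1]
        dy = x[..., 1:, :] - x[..., :-1, :]
        return F.pad(dx.abs(), (0, 1)) + F.pad(dy.abs(), (0, 0, 0, 1))

    target = torch.maximum(grad_mag(src1), grad_mag(src2))
    return F.l1_loss(grad_mag(fused), target)

def ewc_penalty(model, fisher, old_params, lam=1.0):
    """Standard EWC penalty: a quadratic pull toward the previous
    task's parameters, weighted by the diagonal Fisher information."""
    loss = 0.0
    for name, p in model.named_parameters():
        loss = loss + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam / 2.0 * loss
```

Presumably both terms would be added to the generator's adversarial objective, with the EWC penalty active from the second fusion task onward; the weighting between the terms is not specified in the abstract.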
