Abstract

Recent years have witnessed remarkable progress in learning-based single image dehazing. However, dehazing methods trained on synthetic images fail to generalize to real hazy images due to the domain gap, and existing domain adaptation methods concentrate only on learning a mapping or extracting shared features while ignoring the representations of deep features. In this paper, we propose a cross-domain dehazing architecture that integrates domain adaptation with disentangled representation learning. Specifically, we first construct a shared encoder that maps synthetic and real hazy images into a latent space, narrowing their domain gap at the feature level. A separator then disentangles the hazy image features into haze-free and haze-related representations, with a feature consistency loss and an orthogonal loss further guiding the disentanglement. A decoder produces clean images from the haze-free features, and both supervised and unsupervised losses guide the training process. Moreover, we reconstruct the images from both domains by recombining the separated features, which guarantees information completeness. Experiments demonstrate that our method is on par with state-of-the-art methods. Code is available at https://github.com/lixiaopeng123456/DADRnet.
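The orthogonal loss mentioned above pushes the haze-free and haze-related feature vectors apart. The abstract does not specify its exact form; a common instantiation, sketched below as an assumption, penalizes the squared cosine similarity between the two separated feature vectors so that the loss is zero exactly when they are orthogonal:

```python
import numpy as np

def orthogonal_loss(f_clean: np.ndarray, f_haze: np.ndarray) -> float:
    """Hypothetical orthogonality penalty between separated features.

    f_clean, f_haze: (batch, dim) matrices of haze-free and haze-related
    feature vectors produced by the separator. Returns the mean squared
    cosine similarity across the batch, which is 0 when every pair of
    vectors is orthogonal and 1 when they are perfectly aligned.
    """
    # Normalize each feature vector to unit length.
    c = f_clean / np.linalg.norm(f_clean, axis=1, keepdims=True)
    h = f_haze / np.linalg.norm(f_haze, axis=1, keepdims=True)
    cos = np.sum(c * h, axis=1)  # per-sample cosine similarity
    return float(np.mean(cos ** 2))
```

Minimizing this term alongside the reconstruction objectives encourages the separator to place haze content and scene content in complementary subspaces.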
