Abstract
Haze severely degrades the visibility of scene objects and deteriorates the performance of autonomous driving, traffic monitoring, and other vision-based intelligent transportation systems. As a potential remedy, we propose a novel unified Transformer with semantically contrastive learning for image dehazing, dubbed USCFormer. USCFormer makes three key contributions. First, it absorbs the respective strengths of CNNs and Transformers by incorporating them into a unified Transformer format, allowing the simultaneous capture of global-local dependency features for better image dehazing. Second, by casting clean/hazy images as positive/negative samples, the contrastive constraint encourages the restored image to be closer to the ground-truth image (positive) and farther from the hazy one (negative). Third, we regard semantic information as important prior knowledge that helps USCFormer mitigate the effects of haze on the scene and preserve image details and colors by leveraging intra-object semantic correlation. Experiments on synthetic datasets and real-world hazy photos validate the superiority of USCFormer in both perceptual quality assessment and subjective evaluation. Code is available at https://github.com/yz-wang/USCFormer.
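To make the contrastive constraint described above concrete, the following is a minimal sketch (not the authors' implementation) of a contrastive regularization term: it pulls the restored image toward the clean ground truth (positive) and pushes it away from the hazy input (negative) in a fixed feature space. The use of frozen VGG-19 features, the chosen layers, and the L1 distance are assumptions for illustration only.

```python
# Hedged sketch of a contrastive regularization loss for dehazing.
# Assumptions: frozen VGG-19 features as the embedding space, L1 distances,
# and a positive/negative distance ratio as the objective.
import torch
import torch.nn as nn
from torchvision.models import vgg19


class ContrastiveRegularization(nn.Module):
    def __init__(self, layer_ids=(3, 8, 17), eps=1e-7):
        super().__init__()
        # Frozen, pretrained VGG-19 provides the feature space (an assumption).
        vgg = vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.layer_ids = set(layer_ids)
        self.eps = eps

    def extract(self, x):
        # Collect intermediate activations at the selected layers.
        feats = []
        for i, layer in enumerate(self.vgg):
            x = layer(x)
            if i in self.layer_ids:
                feats.append(x)
        return feats

    def forward(self, restored, clean, hazy):
        loss = 0.0
        for fr, fp, fn in zip(self.extract(restored),
                              self.extract(clean),
                              self.extract(hazy)):
            d_pos = torch.mean(torch.abs(fr - fp))    # distance to positive (clean)
            d_neg = torch.mean(torch.abs(fr - fn))    # distance to negative (hazy)
            loss = loss + d_pos / (d_neg + self.eps)  # minimize the ratio
        return loss


# Usage sketch: add the term to a reconstruction objective with a small weight.
# criterion = ContrastiveRegularization()
# total_loss = l1_loss(restored, clean) + 0.1 * criterion(restored, clean, hazy)
```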