Abstract
Self-supervised depth estimation has achieved remarkable results in clear weather. In foggy scenes, however, its performance is limited by the low contrast and reduced visibility that fog causes. To address this problem, we propose an end-to-end feature separation network for self-supervised depth estimation on foggy images. Taking paired clear and synthetic foggy images as input, a feature extractor trained with an orthogonality loss separates the image information into interference information (illumination, fog, etc.) and invariant information (structure, texture, etc.), and the invariant information is used to estimate depth. Meanwhile, a similarity loss constrains the depth of the foggy image by using the depth of the clear image as a pseudo-label, and an attention module and a reconstruction loss refine the output, so that better depth maps can be obtained. The network is then fine-tuned on real-world fog images, which effectively reduces the domain gap between synthetic and real data. Experiments show that our approach produces advanced results on both synthetic datasets and the Cityscapes dataset, demonstrating its superiority.
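The abstract names two of the training signals: an orthogonality loss that pushes the interference and invariant feature branches apart, and a similarity loss that treats the clear-image depth as a pseudo-label for the foggy-image depth. The paper does not give the exact formulations here, so the following is only a minimal sketch of one common way such losses are defined (squared cosine similarity for orthogonality, L1 distance for the pseudo-label term); the function names and the per-sample feature layout are assumptions, not the authors' implementation.

```python
import numpy as np

def orthogonality_loss(invariant, interference, eps=1e-8):
    """Hypothetical sketch: penalize overlap between the two feature branches.

    invariant, interference: (batch, dim) feature matrices.
    Returns the mean squared cosine similarity between paired rows,
    which reaches zero when the two feature sets are orthogonal.
    """
    a = invariant / (np.linalg.norm(invariant, axis=1, keepdims=True) + eps)
    b = interference / (np.linalg.norm(interference, axis=1, keepdims=True) + eps)
    cos = np.sum(a * b, axis=1)       # per-sample cosine similarity
    return float(np.mean(cos ** 2))   # 0 when branches carry disjoint information

def similarity_loss(depth_fog, depth_clear):
    """Hypothetical sketch: L1 distance between the foggy-image depth and the
    clear-image depth, with the latter treated as a fixed pseudo-label
    (i.e., no gradient would flow through depth_clear during training)."""
    return float(np.mean(np.abs(depth_fog - depth_clear)))
```

In a real training loop these terms would be weighted and summed with the reconstruction loss; the weights are hyperparameters not stated in the abstract.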