Abstract

Understanding traffic scene images taken from vehicle-mounted cameras is important for high-level tasks such as advanced driver assistance systems and autonomous driving. It is a challenging problem due to large variations under different weather or illumination conditions. In this paper, we tackle the problem of traffic scene understanding from a cross-domain perspective. We attempt to understand a traffic scene from images taken at the same location but under different weather or illumination conditions (e.g., understanding the same traffic scene from images taken on a rainy night with the help of images taken on a sunny day). To this end, we propose a dense correspondence-based transfer learning (DCTL) approach, which consists of three main steps: 1) extracting deep representations of traffic scene images via a fine-tuned convolutional neural network; 2) constructing compact and effective representations via cross-domain metric learning and subspace alignment for cross-domain retrieval; and 3) transferring the annotations from the retrieved best-matching image to the test image based on cross-domain dense correspondences and a probabilistic Markov random field. To verify the effectiveness of our DCTL approach, we conduct extensive experiments on a challenging data set, which contains 1828 images from six weather or illumination conditions.
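The cross-domain retrieval in step 2 can be sketched with a standard subspace-alignment procedure: project each domain onto its PCA basis, align the source basis to the target basis, and then retrieve nearest neighbours across domains. This is a minimal illustration on synthetic features; the function names, feature dimensions, and data below are hypothetical, and the paper's metric-learning component and CNN features are not reproduced here.

```python
import numpy as np

def subspace_alignment(source_feats, target_feats, d=8):
    """Align the source PCA subspace to the target PCA subspace.

    Returns source and target features projected into a shared d-dim
    space in which cross-domain nearest-neighbour retrieval is meaningful.
    (Illustrative sketch, not the paper's exact formulation.)
    """
    def pca_basis(X, d):
        # Top-d right singular vectors of the centred data: (feat_dim, d).
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        return Vt[:d].T

    Xs = pca_basis(source_feats, d)   # source subspace basis
    Xt = pca_basis(target_feats, d)   # target subspace basis
    M = Xs.T @ Xt                     # alignment matrix mapping Xs onto Xt

    src_aligned = source_feats @ Xs @ M   # source features in target-aligned coords
    tgt_proj = target_feats @ Xt          # target features on their own basis
    return src_aligned, tgt_proj

# Toy usage: retrieve the best-matching "sunny day" image for a
# "rainy night" query. The rainy features are the sunny ones plus a
# small synthetic domain shift, so image 3 should match image 3.
rng = np.random.default_rng(0)
sunny = rng.normal(size=(50, 32))                       # gallery CNN features
rainy = sunny + rng.normal(scale=0.05, size=(50, 32))   # shifted query domain

gallery, queries = subspace_alignment(sunny, rainy, d=8)
q = queries[3]
best = np.argmin(np.linalg.norm(gallery - q, axis=1))   # index of best match
```

In the paper's pipeline, the retrieved best match then supplies the annotations that are transferred to the test image via dense correspondences and the Markov random field.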


