Abstract

Although deep learning-based calibration methods can predict extrinsic camera parameters from a single image, their accuracy is severely degraded by "data uncertainty", e.g., noisy and outlying input images. To address this problem, we propose a novel Data Uncertainty-Driven Loss (DUD Loss), which derives the uncertainty from the input image during the camera calibration process. Instead of estimating the camera extrinsic parameters as scalars, the proposed method models each parameter as a Gaussian distribution whose variance represents the uncertainty of the input image. Hence, each camera parameter is no longer a deterministic scalar, but a probabilistic value drawn from a learned distribution. With the help of the DUD loss, the network can be trained to alleviate the perturbations caused by noisy input images. Furthermore, in the real-world single-image camera calibration scenario, noisy input images, which yield larger variance/uncertainty, can be effectively discarded without increasing the computation cost. To evaluate our method, we propose a large-scale Vehicle-Infrastructure Collaborative Autonomous Driving dataset, dubbed VICAD, which contains millions of objects (e.g., cars, trucks, pedestrians) along with the camera calibration parameters (i.e., the roll, pitch, and yaw angles, and the height of the camera mounting position). Extensive experiments conducted on the proposed dataset demonstrate the effectiveness of the proposed method: the DUD loss achieves more accurate camera calibration than the state of the art without increasing the computation cost.
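The abstract does not give the exact form of the DUD loss, but modeling each extrinsic parameter as a Gaussian whose variance encodes per-image uncertainty is commonly realized as a Gaussian negative log-likelihood. The sketch below is a minimal, hypothetical illustration of that idea (the function name and the log-variance parameterization are assumptions, not the paper's definition): the network would output a mean and a log-variance per parameter, and large predicted variance down-weights the squared error on noisy samples.

```python
import numpy as np

def dud_loss(mu, log_var, target):
    """Illustrative Gaussian negative log-likelihood over predicted
    camera parameters (e.g., roll, pitch, yaw, height).

    mu      : predicted mean of each extrinsic parameter
    log_var : predicted log-variance, read as per-sample data uncertainty
    target  : ground-truth parameter values

    NOTE: a sketch of the general uncertainty-driven loss family, not the
    paper's exact DUD loss formulation.
    """
    # The exp(-log_var) factor shrinks the residual term for samples the
    # network marks as uncertain; the +0.5*log_var term penalizes
    # predicting large uncertainty everywhere.
    return np.mean(0.5 * np.exp(-log_var) * (target - mu) ** 2
                   + 0.5 * log_var)
```

At inference time, the same predicted variance can serve as a free confidence score: images whose variance exceeds a threshold are simply skipped, which matches the abstract's claim that filtering noisy inputs adds no computation cost.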
