Abstract

Deep learning approaches have been gaining importance in several applications. However, the widespread use of these methods in safety-critical domains, such as Autonomous Driving, still depends on their reliability and trustworthiness. The goal of this paper is to provide a review of deep learning-based uncertainty methods and their applications to support perception tasks for Autonomous Driving. We detail significant Uncertainty Quantification and calibration methods, their contributions and limitations, and important metrics and concepts. We present an overview of the state of the art in out-of-distribution detection and active learning, where uncertainty estimates are commonly applied. We show how these methods have been applied in the automotive context, providing a comprehensive analysis of reliable AI for Autonomous Driving. Finally, challenges and opportunities for future work are discussed for each topic.
