Abstract

Deep learning (DL) models are becoming effective at solving computer-vision tasks such as semantic segmentation, object tracking, and pose estimation on images captured in the real world. Reliability analysis of autonomous systems that use these DL models as part of their perception pipelines has to account for the performance of the models. Autonomous systems with traditional sensors have tried-and-tested reliability assessment processes built on modular design, unit tests, system integration, compositional verification, and certification. In contrast, DL perception modules rely on data-driven, learned models. These models do not capture uncertainty and often lack robustness. Moreover, they are frequently updated throughout a product's lifecycle as new data sets become available, and each update forces the reliability assessment and operation processes for the autonomous system to restart from scratch. In this paper, we discuss three challenges related to specifying, verifying, and operating systems that incorporate DL-based perception, and we illustrate these challenges through two concrete, open-source examples.
