Abstract
RGB cameras are among the most relevant sensors for autonomous driving applications. It is undeniable that failures of vehicle cameras may compromise the autonomous driving task, possibly leading to unsafe behaviors when the images subsequently processed by the driving system are altered. To support the definition of safe and robust vehicle architectures and intelligent systems, in this paper we define the failure modes of a vehicle camera, together with an analysis of their effects and known mitigations. Further, we build a software library for the generation of the corresponding failed images, and we feed them to six object detectors for mono and stereo cameras and to the self-driving agent of an autonomous driving simulator. The resulting misbehaviors, compared with operation on clean images, allow a better understanding of failure effects and the related safety risks in image-based applications.
Highlights
Autonomous driving is attracting growing attention in recent years, with ever-increasing demand and investments from the industry [17]
When cameras are used for safety-critical applications, e.g., in the autonomous driving domain, the definition of failure modes helps software and system engineers build resilient architectures and assess application robustness
This paper identifies the failure modes of a vehicle camera, describing their effects on the output image
Summary
Autonomous driving is attracting growing attention in recent years, with ever-increasing demand and investments from the industry [17]. Cameras are amongst the cheapest solutions to build autonomous driving systems. We describe the failures, their causes, and their potential effects on the system, and we reproduce the failure modes using a python library that we developed and that is publicly available at [9]. In Section 4 we execute six object detectors on altered images from the KITTI dataset.
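The failure-injection approach described above can be sketched with a few image transformations. This is a minimal illustration, not the paper's actual library [9]; the function names and the two example failure modes (dead pixels and overexposure) are our own assumptions about what such a library might contain.

```python
import numpy as np


def inject_dead_pixels(image, fraction=0.01, seed=0):
    # Hypothetical example: set a random fraction of pixels to black,
    # mimicking dead photosites on the camera sensor.
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = out.shape[:2]
    n = int(h * w * fraction)
    ys = rng.integers(0, h, size=n)
    xs = rng.integers(0, w, size=n)
    out[ys, xs] = 0
    return out


def inject_overexposure(image, gain=2.5):
    # Hypothetical example: scale intensities and clip to the valid range,
    # mimicking a stuck auto-exposure controller.
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)


# A uniform gray "clean" frame stands in for a KITTI image here.
clean = np.full((4, 4, 3), 100, dtype=np.uint8)
failed = inject_overexposure(inject_dead_pixels(clean, fraction=0.25))
```

A failed image produced this way could then be fed to an object detector in place of the clean frame, so that detection misbehaviors can be compared against the clean-image baseline.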