Abstract

Today's automotive cyber-physical systems for autonomous driving aim to enhance driving safety by replacing the uncertainties posed by human drivers with standard procedures of automated systems. However, the accuracy of in-vehicle perception systems may vary significantly under different operational conditions (e.g., fog density, lighting condition, etc.) and consequently degrade the reliability of autonomous driving. A perception system for autonomous driving must be carefully validated with an extremely large dataset collected under all possible operational conditions in order to ensure its robustness. Such a validation dataset, however, is expensive or even impossible to acquire in practice, since most operational corners rarely occur in a real-world environment. In this paper, we propose to generate synthetic datasets at a variety of operational corners by using a parameterized cycle-consistent generative adversarial network (PCGAN). The proposed PCGAN learns from an image dataset recorded under real-world operational conditions with only a few samples at corners and synthesizes a large dataset at a given operational corner. Taking STOP sign detection as an example, our numerical experiments demonstrate that the proposed approach generates high-quality synthetic datasets that facilitate accurate validation.
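
To make the idea of a condition-parameterized generator concrete, the following minimal PyTorch sketch shows one way a CycleGAN-style generator could be conditioned on a scalar operational parameter such as fog density. The architecture, layer sizes, and the conditioning scheme (broadcasting the scalar to an extra input channel) are illustrative assumptions, not the PCGAN architecture from the paper.

```python
# Minimal sketch (assumed architecture, NOT the paper's PCGAN): a CycleGAN-style
# generator conditioned on a scalar operational parameter, e.g., normalized fog density.
import torch
import torch.nn as nn

class ParameterizedGenerator(nn.Module):
    """Maps a nominal-condition image plus a scalar condition to a corner-condition image."""
    def __init__(self, channels: int = 3, base: int = 64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(channels + 1, base, kernel_size=7, padding=3),  # +1 channel for the condition map
            nn.InstanceNorm2d(base), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.InstanceNorm2d(base), nn.ReLU(inplace=True),
            nn.Conv2d(base, channels, kernel_size=7, padding=3),
            nn.Tanh(),  # output images in [-1, 1]
        )

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Broadcast the scalar condition to a full-resolution channel and concatenate with the image.
        cond_map = cond.view(-1, 1, 1, 1).expand(-1, 1, x.size(2), x.size(3))
        return self.decode(self.encode(torch.cat([x, cond_map], dim=1)))

# Usage: translate a batch of nominal-condition images toward a fog level of 0.8.
gen = ParameterizedGenerator()
clear = torch.randn(2, 3, 128, 128)    # placeholder batch of images
fog_level = torch.full((2,), 0.8)      # operational-condition parameter per image
synthetic = gen(clear, fog_level)      # shape (2, 3, 128, 128)
```

In a full cycle-consistent setup, a second generator mapping corner-condition images back to the nominal condition would be trained jointly, with an L1 cycle-consistency loss constraining the translation alongside the adversarial losses.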
