Abstract

Neural networks in autonomous vehicles suffer from overfitting, poor generalizability, and untrained edge cases due to limited data availability. Researchers often synthesize randomized edge-case scenarios to assist in the training process, though simulation introduces the potential for overfitting to latent rules and features. Automating worst-case scenario generation could yield informative data for improving self-driving systems. To this end, we present a “Physically Adversarial Intelligent Network”, wherein self-driving vehicles interact aggressively in the CARLA simulator. We train two agents, a protagonist and an adversary, using dueling double deep Q networks with prioritized experience replay. The coupled networks alternately seek to cause and avoid collisions, such that the “defensive” avoidance algorithm increases the mean time to failure and distance traveled under non-hostile operating conditions. The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner-case failures resulting in collisions than an agent trained without an adversary.
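The abstract names three standard components: the dueling Q-value decomposition, the double-DQN bootstrap target, and prioritized experience replay. The sketch below illustrates each in isolation with NumPy; it is a minimal illustration of those textbook formulas, not the paper's implementation, and all function names and shapes are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def dueling_q(value, advantages):
    """Dueling decomposition: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    value: (batch, 1) state-value stream; advantages: (batch, n_actions).
    """
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_dqn_target(reward, done, gamma, q_online_next, q_target_next):
    """Double-DQN target: the online net selects the next action,
    the target net evaluates it, which reduces overestimation bias."""
    a_star = np.argmax(q_online_next, axis=-1)
    q_eval = q_target_next[np.arange(len(a_star)), a_star]
    return reward + gamma * (1.0 - done) * q_eval

def per_sample(priorities, batch_size, alpha=0.6, beta=0.4):
    """Prioritized replay: sample index i with P(i) ∝ p_i^alpha and
    correct the bias with importance weights w_i = (N * P(i))^(-beta),
    normalized by the max weight for stability."""
    p = priorities ** alpha
    probs = p / p.sum()
    idx = rng.choice(len(priorities), size=batch_size, p=probs)
    weights = (len(priorities) * probs[idx]) ** (-beta)
    weights /= weights.max()
    return idx, weights
```

In the paper's setup, both the protagonist and the adversary would train with updates of this form; only their reward signs differ (the adversary is rewarded for causing collisions, the protagonist for avoiding them).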
