Safe real-world navigation for autonomous vehicles (AVs) requires robust perception and decision-making, especially in complex, multi-agent scenarios. Existing AV datasets are limited by their inability to capture diverse V2X communication scenarios, their lack of synchronized multi-sensor data, and their insufficient coverage of critical edge cases in multi-vehicle interactions. This paper introduces VRDeepSafety, a novel and scalable VR simulation platform that overcomes these limitations by integrating Vehicle-to-Everything (V2X) communication, including realistic latency, packet loss, and signal prioritization, to enhance AV accident prediction and mitigation. VRDeepSafety generates comprehensive datasets featuring synchronized multi-vehicle interactions, coordinated V2X scenarios, and diverse sensor data, including visual, LiDAR, radar, and V2X streams. Evaluated with our novel deep-learning model, VRFormer, which uniquely fuses VR sensor data with V2X streams using probabilistic Bayesian inference and a hierarchical Kalman and particle filter structure, VRDeepSafety achieved an 85% accident prediction accuracy (APA) at a 2 s horizon, a 17% increase in 3D object detection precision (mAP), and a 0.3 s reduction in response time, outperforming a single-vehicle baseline. Furthermore, V2X integration increased APA by 15%. Extending the prediction horizon to 3–4 s reduced APA to 70%, highlighting the trade-off between prediction time and accuracy. The high-fidelity simulation and integrated V2X of VRDeepSafety provide a valuable and rigorous tool for developing safer and more responsive AVs.
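The abstract does not detail VRFormer's fusion mechanism, but the Kalman-filter stage of such a pipeline can be illustrated with a minimal, self-contained sketch. The snippet below is a generic scalar Kalman update that fuses an onboard position estimate with a V2X-reported measurement; all variable names and numeric values are illustrative assumptions, not the paper's actual method or data.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman update: fuse prior estimate (mean x, variance P)
    with a new measurement z of variance R."""
    K = P / (P + R)          # Kalman gain: weight given to the measurement
    x_new = x + K * (z - x)  # corrected state estimate
    P_new = (1.0 - K) * P    # reduced posterior uncertainty
    return x_new, P_new

# Hypothetical example: onboard sensors estimate a nearby vehicle's
# position at 10.0 m with variance 4.0; a V2X message reports 12.0 m
# with variance 1.0 (more trusted, so it pulls the estimate strongly).
x, P = kalman_update(x=10.0, P=4.0, z=12.0, R=1.0)
print(x, P)  # -> 11.6 0.8
```

In a full hierarchical design, updates like this would run per tracked object, with a particle filter layered on top for the non-Gaussian, multi-modal parts of the interaction model.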