Abstract
"This work is an initial activity in the OPREVU project. This project, funded by the Spanish Ministry of Science and Innovation, is aimed at the use of Virtual Reality (VR) and Artificial Intelligence tools to allow the extraction of Vulnerable Road Users (VRU) behaviour patterns in the event of pedestrian collisions, in order to optimize the Autonomous Emergency Braking (AEB) technologies, incorporated in the new generations of commercial vehicles. With the aim of developing and optimizing the current pedestrian identification systems, vehicle tests are performed on INSIA test track with different vehicles to analyse the behaviour of their AEB systems. These systems are equipped with a Lidar and a camera, whose joint operation allows detecting the proximity of the pedestrian and obtaining variables of interest to assess the automatic intervention of the braking system. The tests are inspired by the CPNA50 and CPNA25 tests, carried out by EURONCAP to validate and certify AEB systems. The reference variables are the TTC (Time-to-collision) and the TFCW. FCW (Forward Collision Warning) is a visual and acoustic signal that appears as a warning light or digitally on the instrument panel and warns of the presence of an obstacle in the vehicle's path, and TFCW is calculated from the sum of the driver's average reaction time and the time to stop if the driver applies pressure on the brake pedal until full detection. On the other hand, TTC is the time calculated from the distance and relative speed of the vehicle with respect to the pedestrian. If the TTC is less than TFCW, the system intervenes. By means of the CARSIM© simulation tool (vehicle-pedestrian-road), it is attainable to modify certain boundary parameters, such as the initial conditions of movement of the pedestrian and the vehicle, as well as their initial relative disposition at the beginning of each test. Along these lines, the virtual model incorporates reactions patterns for the pedestrian, such as stopping, running, or changing direction while crossing the road. These reaction patterns are defined by means of VR tests. The CARSIM© vehicle model integrates the fusion of camera and LiDAR data, and an operating algorithm to control the AEB activation. Using Machine Learning techniques, it is feasible to breed vehicle models based on behavioural patterns from the values obtained in the real tests, and to find out the correlation between the corresponding TTC and TFCW values with parameters measured by the calibration equipment, such as: the maximum speed and the time in which it is reached, the initial relative distance and the relative distance at the moment of AEB activation, or the average deceleration from the start of braking, among others. Hence, the data obtained on the INSIA test tracks allow the virtual models to be validated. Furthermore, the novelty of the approach is to consider the pedestrian reactions just before collision, extracted through users’ experiments made in ad-hoc VR environment, and to generate a more optimised and robust logic with a greater capacity for anticipation."