Abstract

Automated vehicles (AVs) will provide greater transport convenience and interconnectivity, increase mobility options for young and elderly people, and reduce traffic congestion and emissions. However, the largest obstacle to the deployment of automated vehicles on public roads is their safety evaluation and validation. Undeniably, cameras and Artificial Intelligence-based (AI) vision play a vital role in perceiving the driving environment and ensuring road safety. Although a significant number of studies on the detection and tracking of vehicles have been conducted, none has focused on the role of vertical vehicle dynamics. For the first time, this paper analyzes and discusses the influence of road anomalies and vehicle suspension on the performance of detecting and tracking driving objects. To this end, we conducted an extensive road field study and validated a computational tool for performing the assessment through simulations. A parametric study revealed the cases in which AI-based vision underperforms and may significantly degrade the safety performance of AVs.

Highlights

  • The functionality of automated driving systems (ADS) is grounded on a processing chain of perception, control and vehicle platform manipulation [1]

  • Since the algorithms were trained solely on real-world images, their sensitivity to virtual data may vary significantly

  • The same behavior is observed for the largest bump height, but at significantly lower Intersection over Union (IoU) values, even reaching a minimum of 0% intersection (a minimal sketch of the IoU computation is given below)
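
For reference, the Intersection over Union (IoU) metric quantifies the overlap between a predicted bounding box and the corresponding ground-truth box; 0% intersection means the two boxes no longer overlap at all. The following Python snippet is a minimal sketch of how IoU is typically computed for axis-aligned boxes; the box coordinates are hypothetical examples and the code is not taken from the paper.

```python
# Minimal sketch (not from the paper): Intersection over Union (IoU) between
# two axis-aligned bounding boxes given as (x_min, y_min, x_max, y_max).
# An IoU of 0.0 corresponds to the "0% intersection" case mentioned above,
# i.e. the predicted box no longer overlaps the ground-truth box at all.

def iou(box_a, box_b):
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Intersection area is zero if the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


if __name__ == "__main__":
    ground_truth = (100, 120, 220, 260)   # hypothetical annotated vehicle box
    prediction = (110, 150, 230, 280)     # hypothetical detector output
    print(f"IoU = {iou(ground_truth, prediction):.2f}")
```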


Introduction

The functionality of automated driving systems (ADS) is grounded on a processing chain of perception, control and vehicle platform manipulation [1]. Camera sensors represent the most cost-effective solution and have found their way into most ADS [2]. Camera-based vehicle perception can be divided into three levels of complexity that build on each other: object detection, tracking and behavior analysis [3,4]. Object detection is in turn divided into appearance- and motion-based solutions [5]. While motion-based approaches rely on a subject's motion signature in a continuous image stream [6,7], appearance-based detectors are further subdivided into one- and two-step procedures [8].
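
To illustrate how tracking builds on detection, the sketch below greedily associates the detections of a new frame with existing tracks using IoU overlap. This is a generic, simplified tracking-by-detection step, not the specific algorithm evaluated in this work; the box format, the `iou` helper and the 0.3 threshold are assumptions made for the example.

```python
# Illustrative sketch only (not this paper's algorithm): greedy IoU-based
# association of per-frame detections with existing tracks -- the basic step
# that turns object detection into object tracking. Box format and the 0.3
# threshold are example assumptions.

def iou(a, b):
    # Overlap of two (x_min, y_min, x_max, y_max) boxes, as sketched above.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match track boxes to detection boxes by highest IoU."""
    matches, unmatched_dets = [], list(range(len(detections)))
    for t_idx, track_box in enumerate(tracks):
        best_iou, best_d = 0.0, None
        for d_idx in unmatched_dets:
            overlap = iou(track_box, detections[d_idx])
            if overlap > best_iou:
                best_iou, best_d = overlap, d_idx
        if best_d is not None and best_iou >= iou_threshold:
            matches.append((t_idx, best_d))
            unmatched_dets.remove(best_d)
        # A track left unmatched here (e.g. because a road anomaly abruptly
        # shifts the camera view and the boxes no longer overlap) may be lost.
    matched_tracks = {m[0] for m in matches}
    unmatched_tracks = [t for t in range(len(tracks)) if t not in matched_tracks]
    return matches, unmatched_tracks, unmatched_dets
```

Production trackers typically replace the greedy loop with Hungarian assignment and add a motion model, but the IoU-gated association shown here is the core idea.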
