Abstract

Today, vehicles contain a wide range of electronic driver assistance systems. These systems, for example the Anti-lock Braking System (ABS) or Electronic Stability Control (ESC), increase car safety and, more broadly, road safety. More complex Advanced Driver Assistance Systems (ADAS), such as Lane Departure Warning, Overtaking Assistant, Collision Warning, or Emergency Braking, not only observe the parameters of the vehicle itself but also require information about the environment. Future applications targeting autonomous driving need an even more detailed understanding of the vehicle’s environment and the current driving situation. Therefore, vehicles are equipped with a number of sensors that enable the perception of the vehicle’s surroundings, including other road users. However, the sensors typically used deliver large amounts of raw, unrefined data, from which the necessary information must be extracted. For camera sensors, for instance, an algorithm called Scene Labeling can be used to detect relevant objects in camera images. It assigns every pixel of an input image to a semantic class (e.g., road, car, free space) and can therefore be used to extract detailed information from the scene.
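The per-pixel assignment described above can be sketched in a few lines: given per-pixel class scores (as a segmentation model would produce), each pixel is labeled with its highest-scoring class. This is a minimal illustration, not the paper's actual method; the class list and array shapes are assumptions for the example.

```python
import numpy as np

# Hypothetical class list; the paper does not enumerate its exact classes.
CLASSES = ["road", "car", "free_space"]

def label_scene(scores: np.ndarray) -> np.ndarray:
    """Assign each pixel to the semantic class with the highest score.

    scores: array of shape (H, W, C), one score per pixel per class
    returns: integer label map of shape (H, W)
    """
    return np.argmax(scores, axis=-1)

# Toy example: a 2x2 "image" with 3 class scores per pixel.
scores = np.array([
    [[0.9, 0.05, 0.05], [0.1, 0.8, 0.1]],
    [[0.2, 0.2, 0.6],   [0.7, 0.2, 0.1]],
])
labels = label_scene(scores)
# Each entry of `labels` indexes into CLASSES, giving a dense semantic map.
```

In practice the scores come from a trained model (e.g., a convolutional network); the argmax step is how a dense label map is obtained from them.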
