Abstract

For driver assistance and autonomous driving systems, it is essential to predict the behaviour of other traffic participants. Usually, standard filter approaches are used to this end; in many cases, however, these are not sufficient. For example, pedestrians are able to change their speed or direction instantly. Also, there may not be enough observation data to determine the state of an object reliably, e.g. in the case of occlusions. In those cases, it is very useful if a prior model exists which suggests certain outcomes, for example the knowledge that pedestrians usually cross the road at a certain location and at certain times. This information can be stored in a map, which can then be used as a prior in scene analysis or, in practical terms, to reduce the speed of a vehicle in advance in order to minimize critical situations. In this paper, we present an approach to derive such a spatio-temporal map automatically from the observed behaviour of traffic participants in everyday traffic situations. In our experiments, we use one stationary camera to observe a complex junction where cars, public transportation and pedestrians interact. We concentrate on the pedestrians' trajectories to map traffic patterns. In the first step, we extract trajectory segments from the video data. These segments are then clustered in order to derive a spatial model of the scene in terms of a spatially embedded graph. In the second step, we analyse the temporal patterns of pedestrian movement on this graph. To evaluate our approach, we used a four-hour video sequence and show that we are able to derive traffic light sequences as well as the timetables of nearby public transportation.
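The abstract only outlines the second, temporal step, so the following Python sketch is a rough, hypothetical illustration of the underlying idea: given the timestamps at which pedestrians start to cross at one node of the graph, a dominant cycle length (e.g. a traffic light period or a public transport headway) can be estimated by folding the timestamps over candidate periods and keeping the period with the most concentrated phases. All names, thresholds and the synthetic data are assumptions, not taken from the paper.

```python
import numpy as np

def phase_concentration(timestamps, period):
    """Score how strongly events repeat with the given period.

    Folds each timestamp onto a phase in [0, 2*pi) and returns the
    mean resultant length (1.0 = perfectly periodic, 0.0 = uniform).
    """
    phases = 2.0 * np.pi * (np.asarray(timestamps) % period) / period
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

def estimate_period(timestamps, candidate_periods):
    """Return the candidate period whose folded phases are most concentrated."""
    scores = [phase_concentration(timestamps, p) for p in candidate_periods]
    best = int(np.argmax(scores))
    return candidate_periods[best], scores[best]

# Hypothetical example: synthetic crossing events repeating roughly every 90 s
rng = np.random.default_rng(0)
events = np.concatenate([k * 90.0 + rng.normal(0.0, 3.0, size=5) for k in range(40)])
candidates = np.arange(30.0, 600.0, 1.0)   # test cycle lengths from 30 s to 10 min
period, score = estimate_period(events, candidates)
print(f"estimated cycle length: {period:.0f} s (concentration {score:.2f})")
```

On this synthetic data the score should peak at the assumed 90 s cycle; in a real system such an analysis would be run per graph edge and over sliding time windows, since the regions of interest are described as time-variant.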

Highlights

  • The general issue with human locomotion is that we move faster than we can react

  • We present an approach to generate knowledge about pedestrians and to record it in a dynamic map. In our opinion, such a map can lower the risk for all traffic participants, raise the speed and comfort of autonomous driving, and optimise the path planning of navigation systems

  • These time-variant regions of interest (ROIs) represent the nodes of our walking path network, and edges between them are defined following Feuerhake et al. (2011); a rough illustrative sketch follows this list
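The highlight above only names the ingredients of the walking-path graph (time-variant ROIs as nodes, edges following Feuerhake et al., 2011), so the following Python sketch is purely illustrative. It assumes a simple grid-based grouping of segment start and end points as a stand-in for the actual ROI clustering; the function names, cell size and example coordinates are hypothetical.

```python
from collections import defaultdict

def build_walking_path_graph(segments, cell_size=5.0):
    """Build a coarse, spatially embedded graph from trajectory segments.

    segments  : list of ((x0, y0), (x1, y1)) start/end points in metres
    cell_size : grid resolution used as a stand-in for real ROI clustering
    Returns (nodes, edges), where nodes maps a node id to its centre point
    and edges maps (from_id, to_id) to a transition count.
    """
    def cell(p):
        # Snap a point to its grid cell; a real system would cluster ROIs instead.
        return (int(p[0] // cell_size), int(p[1] // cell_size))

    nodes = {}                   # node id -> representative centre point
    edges = defaultdict(int)     # (from, to) -> number of observed transitions

    for start, end in segments:
        a, b = cell(start), cell(end)
        nodes.setdefault(a, ((a[0] + 0.5) * cell_size, (a[1] + 0.5) * cell_size))
        nodes.setdefault(b, ((b[0] + 0.5) * cell_size, (b[1] + 0.5) * cell_size))
        if a != b:
            edges[(a, b)] += 1   # directed edge weighted by usage
    return nodes, dict(edges)

# Hypothetical example: three pedestrian segments crossing between two road sides
segments = [((1.0, 1.0), (9.0, 1.5)), ((1.2, 0.8), (9.3, 1.1)), ((9.1, 1.0), (1.4, 1.2))]
nodes, edges = build_walking_path_graph(segments)
print(len(nodes), "nodes,", edges)
```

A real implementation would replace the grid cells with the clustered, time-variant ROIs and attach the observation times to each edge so that periodic patterns can be analysed per edge, as sketched above.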


Summary

General Problem Description

The general issue with human locomotion is that we move faster than we can react. Traffic speed makes it impossible for vehicle drivers to brake in time if unexpected cross traffic appears all of a sudden. Human drivers adapt their behaviour to different traffic situations, sometimes in ways that do not obey the law; such situations are therefore difficult to interpret for drivers unfamiliar with the area or for algorithms. The traffic participants with whom the most such violations occur are also the group with the most dynamic behaviour, namely pedestrians. We present an approach to generate knowledge about pedestrians and to record it in a dynamic map. In our opinion, such a map can lower the risk for all traffic participants, raise the speed and comfort of autonomous driving, and optimise the path planning of navigation systems.

Contents

  • Graph Derivation
  • OUR APPROACH
  • Event Identification
  • Visualisation
  • Graph of Walking Path
  • Event Analysis
  • Subway Schedule
  • Database
  • Graph Determination
  • Periodic Time Events
  • EVALUATION
  • Graph Generation
  • Periodic Events
  • SUMMARY AND OUTLOOK