Abstract

This study is founded on the idea that analysing the visual gaze dynamics of pedestrians can increase our understanding of how they perceive important architectural features in urban environments, and that the results of such an analysis can lead to improvements in urban design. However, a technical challenge arises when trying to determine the gaze direction of pedestrians recorded on video: high ‘noise’ levels and the subtlety of human gaze dynamics hamper precise calculation. Because robots can be programmed and analysed more efficiently than humans, this study uses them to develop and train a gaze analysis system, with the aim of later applying it to human video data using the machine learning technique of manifold alignment. For this study, a laboratory was set up as a model street scene in which autonomous humanoid robots approximately 55 cm in height simulate the behaviour of human pedestrians. The experiments compare the inputs from several cameras as the robot walks down the model street and changes its behaviour upon encountering ‘visually attractive objects’. Overhead recordings and the robot's internal joint signals are analysed after filtering to provide ‘true’ data against which the recorded data can be compared for accuracy testing. A central component of the research is the calculation of a torus-like manifold that represents all possible three-dimensional (3D) directions of the robot's head and allows the ordering of 3D gaze vectors extracted from video sequences. We briefly describe how the obtained multidimensional trajectory data can be analysed using a separately developed temporal behaviour analysis technique based on support vector machines.

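The central object mentioned above is a torus-like manifold of 3D head directions. As a rough illustration only (the paper's actual parameterisation, radii, variable names, and ordering scheme are not given here and are assumed), a head orientation described by yaw and pitch angles can be embedded on a torus, and noisy gaze vectors extracted from video frames can then be ordered by their angular coordinates on that manifold:

```python
import numpy as np

# Major/minor radii of the torus embedding (hypothetical values; the study
# does not specify how its torus-like manifold is parameterised).
R_MAJOR, R_MINOR = 2.0, 1.0

def torus_point(yaw, pitch):
    """Embed a head direction (yaw, pitch), in radians, as a point on a torus in R^3.
    Yaw runs around the major circle, pitch around the minor circle."""
    x = (R_MAJOR + R_MINOR * np.cos(pitch)) * np.cos(yaw)
    y = (R_MAJOR + R_MINOR * np.cos(pitch)) * np.sin(yaw)
    z = R_MINOR * np.sin(pitch)
    return np.array([x, y, z])

def gaze_vector(yaw, pitch):
    """Unit 3D gaze vector corresponding to a head orientation."""
    return np.array([np.cos(pitch) * np.cos(yaw),
                     np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch)])

def order_by_manifold_coordinate(gazes):
    """Order extracted gaze vectors by their (yaw, pitch) coordinates,
    i.e. by position on the torus-like manifold (yaw is the primary key)."""
    yaw = np.arctan2(gazes[:, 1], gazes[:, 0])
    pitch = np.arcsin(np.clip(gazes[:, 2], -1.0, 1.0))
    return np.lexsort((pitch, yaw))

# Example: a short sequence of noisy gaze vectors, standing in for vectors
# extracted from video frames of a robot sweeping its head left to right.
rng = np.random.default_rng(0)
true_yaw = np.linspace(-0.5, 0.5, 8)
true_pitch = 0.1 * np.sin(3.0 * true_yaw)
gazes = np.stack([gaze_vector(y, p) for y, p in zip(true_yaw, true_pitch)])
gazes += 0.02 * rng.standard_normal(gazes.shape)   # simulated measurement noise

print(order_by_manifold_coordinate(gazes))          # indices ordered along the manifold
print(torus_point(true_yaw[0], true_pitch[0]))      # first head pose embedded on the torus
```

This sketch only shows how a two-angle head orientation naturally lives on a torus; the study's actual manifold is computed from robot data and aligned to human video data, which is not reproduced here.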