Abstract

Depth perception through stereo vision is an important feature of biological and artificial vision systems. While biological systems compute disparities effortlessly, doing so requires intensive processing in artificial vision systems. The computational complexity lies in solving the correspondence problem: finding matching pairs of points seen by the two eyes. Inspired by the retina, event-based vision sensors provide a new constraint for solving the correspondence problem: time. Relying on precise spike timing, spiking neural networks can exploit this constraint. However, because event-based vision sensors report only local changes in light intensity, disparities can be computed only in dynamic environments. In this paper, we show how microsaccadic eye movements can be used to compute disparities in static environments. To this end, we built a robotic head supporting two Dynamic Vision Sensors (DVS) capable of independent panning and simultaneous tilting. We evaluate the method on both static and dynamic scenes perceived through microsaccades. This paper demonstrates the complementarity of event-based vision sensors and active perception, leading to more biologically inspired robots.
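To make the temporal constraint concrete, the sketch below (not the paper's spiking-network implementation; all names, window sizes, and disparity limits are illustrative assumptions) matches events from a left and a right DVS stream purely by temporal coincidence, together with a same-row and same-polarity check, and reports the column difference as the disparity.

```python
# Minimal sketch: stereo correspondence from event timestamps.
# Events on the same retina row with the same polarity that occur within a
# short coincidence window are treated as candidate matches; the disparity is
# the column difference. Parameters here are illustrative, not from the paper.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Event:
    t: float   # timestamp in microseconds
    x: int     # pixel column
    y: int     # pixel row
    pol: int   # polarity: +1 (ON) or -1 (OFF)

def match_events(left: List[Event],
                 right: List[Event],
                 coincidence_us: float = 1000.0,
                 max_disparity: int = 40) -> List[Tuple[Event, Event, int]]:
    """Greedy temporal matching: for each left event, pick the unused right
    event on the same row with the same polarity whose timestamp is closest,
    within the coincidence window and disparity range."""
    matches = []
    used = set()
    for ev_l in left:
        best: Optional[int] = None
        best_dt = coincidence_us
        for i, ev_r in enumerate(right):
            if i in used or ev_r.y != ev_l.y or ev_r.pol != ev_l.pol:
                continue
            disparity = ev_l.x - ev_r.x
            if not (0 <= disparity <= max_disparity):
                continue
            dt = abs(ev_l.t - ev_r.t)
            if dt < best_dt:
                best, best_dt = i, dt
        if best is not None:
            used.add(best)
            matches.append((ev_l, right[best], ev_l.x - right[best].x))
    return matches

# Example: two synthetic event streams from the same edge seen by both sensors,
# e.g. as generated by a microsaccade over a static scene.
left = [Event(t=10.0, x=64, y=30, pol=1), Event(t=12.0, x=65, y=31, pol=1)]
right = [Event(t=11.0, x=52, y=30, pol=1), Event(t=13.0, x=53, y=31, pol=1)]
for ev_l, ev_r, d in match_events(left, right):
    print(f"row {ev_l.y}: disparity = {d} px (dt = {abs(ev_l.t - ev_r.t):.1f} us)")
```

In the paper's setting, this temporal matching is carried out by a spiking neural network rather than an explicit search; the sketch only illustrates why precise event timing reduces the space of candidate correspondences.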
