Abstract

Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes and output a stream of events that encode the time, location, and sign of each brightness change. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of μs), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have large potential for robotics and computer vision in scenarios that challenge traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras starting from their working principle, the actual sensors that are available, and the tasks they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
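The event format described above (a timestamp, a pixel location, and the sign of the brightness change) can be illustrated with a minimal sketch. The `Event` type and the `accumulate` helper below are hypothetical names for illustration, not an API from the survey; the sketch sums event polarities per pixel into a simple "event frame", one common way to convert a stream of events into an image-like representation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    """A single event: microsecond timestamp, pixel location, polarity."""
    t_us: int       # timestamp in microseconds (event cameras report ~us timing)
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def accumulate(events: List[Event], width: int, height: int) -> List[List[int]]:
    """Sum polarities per pixel to build a simple 'event frame'."""
    frame = [[0] * width for _ in range(height)]
    for e in events:
        frame[e.y][e.x] += e.polarity
    return frame

# A tiny asynchronous stream on a 2x1 sensor: two ON events at pixel (0, 0),
# one OFF event at pixel (1, 0).
stream = [Event(10, 0, 0, +1), Event(35, 1, 0, -1), Event(60, 0, 0, +1)]
frame = accumulate(stream, width=2, height=1)
print(frame)  # → [[2, -1]]
```

Note that the timestamps play no role in this particular reduction; methods that exploit the fine temporal resolution of the sensor keep the per-event timing rather than collapsing it into a frame.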

Highlights

  • “The brain is imagination, and that was exciting to me; I wanted to build a chip that could imagine something.” That is how Misha Mahowald, a graduate student at Caltech in 1986, started to work with Prof. Carver Mead

  • This paper provides an overview of the bio-inspired technology of silicon retinas, or “event cameras”, such as [2], [3], [4], [5], with a focus on their application to solve classical as well as new computer vision and robotic tasks

  • The review [171] compared some early event-based optical flow methods [21], [92], [172], but only on flow fields generated by a rotating camera, i.e., lacking motion parallax and occlusion


Summary

Introduction

“The brain is imagination, and that was exciting to me; I wanted to build a chip that could imagine something.” That is how Misha Mahowald, a graduate student at Caltech in 1986, started to work with Prof. Carver Mead.

Author affiliations: Dept. of Informatics, University of Zurich, and Inst. of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland; Dept. of Information Technology and Electrical Engineering, ETH Zurich; Dept. of Mechanical and Process Engineering, ETH Zurich, Switzerland. Brian Taba is with IBM Research, CA, USA. Kostas Daniilidis is with the University of Pennsylvania, PA, USA.

