Abstract
We compare event-cameras with fast (global shutter) frame-cameras experimentally, asking: “What is the application domain in which an event-camera surpasses a fast frame-camera?” Surprisingly, finding the answer has been difficult. Our methodology was to test event- and frame-cameras on generic computer vision tasks where event-camera advantages should manifest. We used two methods: (1) a controlled, cheap, and easily reproducible experiment (observing a marker on a rotating disk at varying speeds); (2) a challenging practical ballistic experiment (observing a flying bullet, with ground truth provided by an expensive ultra-high-speed frame-camera). The experimental results include sampling/detection rates and position estimation errors as functions of illuminance and motion speed, and the minimum pixel latency of two commercial state-of-the-art event-cameras (ATIS, DVS240). Event-cameras respond more slowly to large, sudden positive contrast changes than to negative ones. They outperformed a frame-camera in bandwidth efficiency in all our experiments. Both camera types provide comparable position estimation accuracy. The better of the two event-cameras was limited by pixel latency when tracking small objects, resulting in motion blur effects. Sensor bandwidth limited the event-cameras in object recognition. However, future generations of event-cameras might alleviate bandwidth limitations.
Highlights
This research, extending our previous work [1], was motivated by our attempts to use an event-camera in robotics for interactive perception.
When considering using event-cameras, one may ask questions such as “What is the application domain in which an event-camera surpasses a fast frame-camera?” or “Which scene conditions can be detrimental to event-camera performance?”
Event-cameras’ performance was limited by pixel latency when tracking small objects and by readout bandwidth in object recognition.
Summary
This research, extending our previous work [1], was motivated by our attempts to use an event-camera in robotics for interactive perception. Our practical experience with state-of-the-art event-cameras fell short of expectations, and this work provides our explanation of why. Event-cameras, known as Dynamic Vision Sensors (DVS), have been popular among academic researchers for about ten years. Independent pixels of event-cameras [2,3] generate asynchronous events in response to local logarithmic intensity changes. Each time the difference passes a preset threshold, the pixel emits a change detection (CD) event and resets its brightness reference to the current brightness. A CD event is characterized by its pixel coordinates, a precise timestamp with microsecond resolution, and the polarity of the brightness change. The advantages of event-cameras over traditional cameras include lower sensor latency, higher temporal resolution, higher dynamic range (120 dB+ vs. ∼60 dB for traditional cameras), implicit data compression, and lower power consumption.
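As a minimal illustration of the pixel model just described, the sketch below simulates change-detection events from a sequence of intensity samples at one pixel. It is only a conceptual model under assumed names and parameters (the function simulate_pixel and the threshold value are illustrative, not any vendor's API), but it shows how a log-intensity change crossing a preset threshold yields a CD event carrying pixel coordinates, a microsecond timestamp, and a polarity.

```python
import math
from dataclasses import dataclass

@dataclass
class CDEvent:
    x: int          # pixel column
    y: int          # pixel row
    t_us: int       # timestamp in microseconds
    polarity: int   # +1 for a brightness increase, -1 for a decrease

def simulate_pixel(intensities, timestamps_us, x, y, threshold=0.3):
    """Illustrative single-pixel model (an assumption, not a camera API).

    Emits a CD event whenever the log-intensity changes by at least
    `threshold` relative to the stored brightness reference, then resets
    the reference to the current brightness, as described in the text.
    """
    events = []
    ref = math.log(intensities[0])          # brightness reference (log domain)
    for i_lin, t in zip(intensities[1:], timestamps_us[1:]):
        diff = math.log(i_lin) - ref
        if abs(diff) >= threshold:
            events.append(CDEvent(x, y, t, +1 if diff > 0 else -1))
            ref = math.log(i_lin)           # reset reference after the event
    return events

# Example: a step increase in brightness produces a single positive event.
print(simulate_pixel([10.0, 10.0, 25.0, 25.0], [0, 100, 200, 300], x=5, y=7))
```

Note that a real sensor performs this comparison asynchronously and in analog circuitry per pixel; the discrete sample loop here only mirrors the thresholding logic.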