Abstract

Event cameras are novel neuromorphic vision sensors with ultrahigh temporal resolution and low latency, both on the order of microseconds. Instead of image frames, event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps. The resulting sparse data structure impedes applying many conventional computer vision techniques to event streams, so dedicated algorithms must be designed to leverage the information provided by event cameras. In our work, a motion- and scene-adaptive time threshold for event data is proposed. As a parameter describing the global characteristics of the event stream, this time threshold can be used for low-level visual tasks such as event denoising and feature extraction. Based on this threshold, the normalization method for the surface of active events (SAE) is explored from a new perspective. Unlike the previous speed-invariant time surface, this normalized SAE is constructed by adaptive exponential decay (AED-SAE) and can be directly applied to the event-based Harris corner detector. The proposed corner detector is evaluated on real and synthetic datasets with different resolutions. The proposed algorithm exhibits higher accuracy than congeneric algorithms and maintains high computational efficiency on datasets with different resolutions and texture levels.
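To make the pipeline concrete, the following is a minimal sketch of the general idea described above: an exponentially decayed, normalized SAE whose time constant adapts to the recent event rate, followed by a standard Harris response computed on that surface. This is not the authors' implementation; the function names, the event tuple layout `(x, y, t, polarity)`, and the specific adaptive rule (time span of the last `k` events) are illustrative assumptions.

```python
import numpy as np
import cv2  # used only for the standard Harris response


def adaptive_tau(timestamps, k=10000):
    """Hypothetical motion/scene-adaptive time threshold: the time span
    of the last k events, so the surface decays faster when the global
    event rate is high (fast motion or rich texture)."""
    recent = timestamps[-k:]
    return max(float(recent[-1] - recent[0]), 1e-6)


def aed_sae(events, width, height):
    """Build a normalized SAE in [0, 1] via exponential decay with an
    adaptive time constant (an AED-SAE-style surface).

    `events` is assumed to be a time-sorted list of (x, y, t, polarity).
    """
    latest = np.full((height, width), -np.inf)  # latest timestamp per pixel
    for x, y, t, _pol in events:
        latest[y, x] = t
    t_now = events[-1][2]
    tau = adaptive_tau(np.array([e[2] for e in events]))
    # Pixels that never fired decay to exactly 0; recent events map to ~1.
    return np.exp(-(t_now - latest) / tau)


def harris_corners(sae, block=2, ksize=3, k=0.04, thresh=0.01):
    """Apply the classic Harris detector directly to the decayed SAE."""
    resp = cv2.cornerHarris(sae.astype(np.float32), block, ksize, k)
    return np.argwhere(resp > thresh * resp.max())  # (row, col) candidates
```

Because the decay constant tracks the event rate rather than being fixed, the resulting surface keeps a comparable contrast profile across slow and fast motion, which is what allows an unmodified Harris response to be applied to it.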
