Abstract

Contrast maximization is an event-camera framework that estimates angular velocity, depth, and optical flow from the events observed within a temporal window. For rotational motion, the angular position can be computed by integrating the estimated angular velocity, but the accumulated drift error degrades the accuracy of the motion estimate. If the contrast maximization framework also exploits events measured before the current temporal window, its performance improves and the drift error is alleviated. In this work, we use globally aligned event data and propose a method that estimates both rotational position and velocity using only an event camera. The proposed algorithm maximizes not only the contrast of the image of events in a single temporal window but also the contrast of the image of events accumulated over time. It runs in real time because it adds little computation to the existing contrast maximization framework; we confirm real-time operation on a single CPU core of a laptop and show that the maximum error stays within 3 degrees on public data sets and on real-world data sets we acquired. To contribute to the community, we release the source code and the real-world data sets to the public.
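To make the single-window principle referred to above concrete, the following is a minimal sketch of contrast maximization for rotational motion: events are warped back to a reference time under a candidate constant angular velocity, accumulated into an image of warped events, and the image variance serves as the contrast objective. This is an illustrative sketch, not the authors' implementation; the pinhole intrinsics `K`, the Rodrigues rotation model, and the grid-search driver at the end are assumptions.

```python
import numpy as np

def rodrigues(r):
    """Rotation matrix from a rotation vector r (axis * angle)."""
    angle = np.linalg.norm(r)
    if angle < 1e-12:
        return np.eye(3)
    k = r / angle
    K_ = np.array([[0, -k[2], k[1]],
                   [k[2], 0, -k[0]],
                   [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K_ + (1 - np.cos(angle)) * (K_ @ K_)

def warp_events(xs, ys, ts, t_ref, omega, K):
    """Rotate event bearing vectors back to the reference time t_ref,
    assuming a constant angular velocity omega over the window."""
    K_inv = np.linalg.inv(K)
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0)   # 3 x N pixel coords
    bearings = K_inv @ pts                                # back-project to unit depth
    dt = ts - t_ref
    warped = np.empty_like(bearings)
    for i, d in enumerate(dt):
        R = rodrigues(omega * d)                          # rotation accrued since t_ref
        warped[:, i] = R.T @ bearings[:, i]               # undo that rotation
    proj = K @ warped                                     # re-project to the image plane
    return proj[0] / proj[2], proj[1] / proj[2]

def contrast(xs, ys, shape):
    """Variance of the image of warped events (the contrast objective)."""
    img, _, _ = np.histogram2d(ys, xs, bins=shape,
                               range=[[0, shape[0]], [0, shape[1]]])
    return img.var()

# Hypothetical usage: choose the candidate angular velocity whose warp
# produces the sharpest (highest-variance) image of warped events.
# best_omega = max(candidate_omegas,
#                  key=lambda w: contrast(*warp_events(xs, ys, ts, ts[0], w, K),
#                                         (height, width)))
```

In practice the objective is maximized with a gradient-based optimizer rather than a grid search; the paper's contribution, as stated in the abstract, is to extend this per-window objective with a contrast term over events accumulated across windows while keeping the added computation small.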
