Abstract

Event cameras are bio-inspired sensors with high dynamic range and high temporal resolution. These properties enable motion estimation from textures with repeating patterns, which is difficult to achieve with RGB cameras, and make event cameras promising for vehicle position estimation. Contrast maximization is an existing method that can estimate the motion of an event camera observing a road surface. However, when estimating three-dimensional motion, contrast maximization tends to fall into local optima, which makes correct estimation difficult. To solve this problem, we propose a method that estimates motion by optimizing contrast in the bird’s-eye view space. Instead of performing three-dimensional motion estimation, we reduce the problem to two-dimensional motion estimation by transforming the event data to a bird’s-eye view using a homography calculated from the event camera pose. This transformation mitigates the non-convexity of the loss function that affects conventional methods. In a quantitative experiment, we generated event data with a car simulator and evaluated our motion estimation method, showing improvements in both accuracy and speed. We also performed estimation on real event data and evaluated the results qualitatively, confirming an improvement in accuracy.
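
As a rough illustration of the bird’s-eye view transformation described above, the Python sketch below warps event pixel coordinates with a 3 × 3 homography; in practice H would be derived from the camera intrinsics and pose relative to the ground plane (e.g., the standard plane-induced homography H = K(R − t nᵀ/d)K⁻¹), but here it is a hypothetical placeholder rather than the paper’s implementation.

    import numpy as np

    def events_to_birds_eye_view(xy, H):
        """Apply a 3x3 homography H to Nx2 event pixel coordinates xy."""
        pts = np.hstack([xy, np.ones((len(xy), 1))])  # to homogeneous coordinates
        warped = pts @ H.T                            # projective transform
        return warped[:, :2] / warped[:, 2:3]         # back to Euclidean coordinates

    # Example: the identity homography leaves coordinates unchanged.
    xy = np.array([[10.0, 20.0], [30.0, 40.0]])
    print(events_to_birds_eye_view(xy, np.eye(3)))

Once events are in this view, the three-dimensional estimation problem reduces to the two-dimensional one described in the highlights.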

Highlights

  • Camera motion estimation is one of the most important technologies in many applications, such as automated driving and assistive technologies

  • An event at pixel $\mathbf{x}$ with time offset $\Delta t_k$ is warped by the homogeneous 2-D rigid transform $\begin{pmatrix} \mathbf{x}' \\ 1 \end{pmatrix} = \begin{pmatrix} R & \mathbf{t} \\ \mathbf{0}^\top & 1 \end{pmatrix} \begin{pmatrix} \mathbf{x} \\ 1 \end{pmatrix}$, where θ = (ω, v_x, v_y) are the model parameters shared by all the pixels, $R = \exp(\Delta t_k\,\hat{\omega})$ is the 2 × 2 rotation matrix (exp is the exponential map and $\hat{\omega}$ the skew-symmetric matrix of ω), and $\mathbf{t} = \Delta t_k\,\mathbf{v}$ is the translation. The novelty of this method is the empirical finding that the bird’s-eye view transformation makes the loss function convex around the true value, which leads to fast and accurate motion estimation (see the code sketch after this list)

  • We clearly showed that when contrast maximization or entropy minimization is used for homography-based motion estimation on a plane, the loss function has multiple local extrema and the estimated motion deviates from the true value
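
A minimal Python sketch of this parameterization, not the authors’ implementation: it applies the warp x′ = Rx + t from the highlight above, with R = exp(∆t_k ω̂) and t = ∆t_k v, and scores a candidate θ = (ω, v_x, v_y) with a simple variance-of-warped-events contrast. The 64 × 64 grid and the count-accumulation scheme are illustrative assumptions.

    import numpy as np

    def warp(xy, dt, theta):
        """Warp Nx2 bird's-eye-view coords xy, observed at time offsets dt (N,),
        by theta = (omega, vx, vy): x' = R x + t with R = exp(dt * omega_hat)."""
        omega, vx, vy = theta
        ang = dt * omega                          # per-event rotation angle
        c, s = np.cos(ang), np.sin(ang)           # entries of the 2x2 rotation
        x, y = xy[:, 0], xy[:, 1]
        xw = c * x - s * y + dt * vx              # R x + t, elementwise
        yw = s * x + c * y + dt * vy
        return np.stack([xw, yw], axis=1)

    def contrast(xy, dt, theta, shape=(64, 64)):
        """Variance of the image of warped events; sharper alignment scores higher."""
        w = np.round(warp(xy, dt, theta)).astype(int)
        ok = (w[:, 0] >= 0) & (w[:, 0] < shape[1]) & \
             (w[:, 1] >= 0) & (w[:, 1] < shape[0])
        img = np.zeros(shape)
        np.add.at(img, (w[ok, 1], w[ok, 0]), 1.0)  # accumulate event counts
        return img.var()

    # Toy usage: with theta = 0 the warp is the identity.
    rng = np.random.default_rng(0)
    xy = rng.random((100, 2)) * 64
    dt = np.linspace(0.0, 0.1, 100)
    print(contrast(xy, dt, theta=(0.0, 0.0, 0.0)))

Maximizing this contrast over θ with a standard optimizer is the two-dimensional estimation problem that, per the highlight above, becomes well behaved after the bird’s-eye view transformation.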

Introduction

Camera motion estimation is one of the most important technologies in many applications, such as automated driving and assistive technologies. Motion estimation by pointing a camera at the road surface has the advantage that objects that can disturb the estimation, such as automobiles and pedestrians, are usually not visible [1,2,3,4,5,6]. The method of [1] estimates the position and orientation of a vehicle from sequential images of the ground using the known homography obtained from a normal camera. Saurer et al. [2] proposed minimal solutions for homography-based egomotion estimation of a camera with a known gravity vector between images. Gilles et al. [3] proposed an unsupervised deep learning method for motion estimation with a downward-looking camera. Despite active research, camera motion estimation using the road surface still faces the following problems: it is difficult to robustly extract and match feature points from noisy ground textures; high-speed camera movement causes motion blur; and illumination conditions are often challenging.
