Abstract

Standardized benchmarks in Computer Vision have greatly contributed to the advancement of approaches to many problems in the field. If we want to enhance the visibility of event-driven vision and increase its impact, we will need benchmarks that allow comparison among different neuromorphic methods as well as comparison to conventional Computer Vision approaches. We present datasets to evaluate the accuracy of frame-free and frame-based approaches for tasks of visual navigation. Similar to conventional Computer Vision datasets, we provide synthetic and real scenes: the synthetic data were created with graphics packages, and the real data were recorded using a mobile robotic platform carrying a dynamic and active pixel vision sensor (DAVIS) and an RGB+Depth sensor. In both datasets the cameras move with a rigid motion in a static scene, and the data include the images, events, optic flow, 3D camera motion, and the depth of the scene, along with calibration procedures. Finally, we also provide simulated event data generated synthetically from well-known frame-based optical flow datasets.
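
As a rough illustration of how such simulated events can be produced, the sketch below applies the standard DVS event-generation model: a pixel emits an event each time its log intensity changes by a contrast threshold, with timestamps interpolated between frame times. This is a minimal sketch under assumed parameters (the threshold value and the linear interpolation scheme are illustrative), not the authors' actual generator.

```python
import numpy as np

def synthesize_events(frame_prev, frame_next, t_prev, t_next, threshold=0.15):
    """Approximate DVS events from two consecutive grayscale frames.

    A pixel fires an event each time its log intensity crosses a
    multiple of `threshold` (positive polarity for brightening,
    negative for darkening). Timestamps are linearly interpolated
    between the two frame times. The threshold value is illustrative.
    """
    eps = 1e-3  # avoid log(0) on dark pixels
    log_prev = np.log(frame_prev.astype(np.float64) + eps)
    log_next = np.log(frame_next.astype(np.float64) + eps)
    diff = log_next - log_prev

    events = []  # each event is a tuple (timestamp, x, y, polarity)
    n_crossings = np.floor(np.abs(diff) / threshold).astype(int)
    ys, xs = np.nonzero(n_crossings)
    for y, x in zip(ys, xs):
        pol = 1 if diff[y, x] > 0 else -1
        for k in range(1, n_crossings[y, x] + 1):
            # Fraction of the log-intensity change reached at the k-th crossing.
            frac = (k * threshold) / abs(diff[y, x])
            events.append((t_prev + frac * (t_next - t_prev), x, y, pol))
    events.sort(key=lambda e: e[0])  # deliver events in time order
    return events
```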

Highlights

  • Asynchronous frame-free vision sensors have gained popularity among vision researchers in recent years

  • We provide the raw data along with the 3D motion and the scene geometry; these data allow for evaluating algorithms on the classic structure-from-motion problems of image motion estimation, 3D motion estimation, reconstruction, and segmentation by depth

  • In addition to the data, we provide the code for calibrating the dynamic and active pixel vision sensor (DAVIS) with respect to the RGB-D sensor, and for calibrating the robotic platform with respect to the DAVIS (an illustrative sketch of the camera-to-camera step follows this list)
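
To make the camera-to-camera calibration step concrete, the sketch below estimates the extrinsic rotation R and translation T between the DAVIS and the RGB camera of the RGB-D sensor from paired checkerboard views, using OpenCV's stereo calibration with fixed intrinsics. This is a generic sketch, not the authors' released calibration code; the board dimensions, square size, and the pre-calibrated intrinsics (K_davis, dist_davis, K_rgb, dist_rgb) are assumptions for illustration.

```python
import cv2
import numpy as np

# Checkerboard geometry (illustrative values).
BOARD_SIZE = (9, 6)   # inner corners per row, per column
SQUARE_SIZE = 0.025   # square edge length in meters

# Template of the board's 3D corner points in its own frame.
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE

def calibrate_davis_to_rgbd(davis_imgs, rgb_imgs,
                            K_davis, dist_davis, K_rgb, dist_rgb):
    """Estimate the rotation R and translation T mapping DAVIS camera
    coordinates to RGB camera coordinates from paired checkerboard
    views. Intrinsics are assumed to be calibrated beforehand."""
    obj_pts, davis_pts, rgb_pts = [], [], []
    for img_d, img_r in zip(davis_imgs, rgb_imgs):
        ok_d, corners_d = cv2.findChessboardCorners(img_d, BOARD_SIZE)
        ok_r, corners_r = cv2.findChessboardCorners(img_r, BOARD_SIZE)
        if ok_d and ok_r:  # keep only views where both cameras see the board
            obj_pts.append(objp)
            davis_pts.append(corners_d)
            rgb_pts.append(corners_r)

    # Keep intrinsics fixed; solve only for the extrinsic transform.
    ret, _, _, _, _, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, davis_pts, rgb_pts,
        K_davis, dist_davis, K_rgb, dist_rgb,
        davis_imgs[0].shape[:2][::-1],  # image size as (width, height)
        flags=cv2.CALIB_FIX_INTRINSIC)
    return R, T, ret  # ret is the RMS reprojection error
```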

Introduction

Asynchronous frame-free vision sensors have gained popularity among vision researchers in recent years. The most prominent of these sensors are the temporal change threshold imager (Mallik et al., 2005), the DVS (Lichtsteiner et al., 2008), the ATIS (Posch et al., 2011), and the DAVIS (Brandli et al., 2014). Inspiration for their design comes from the transient pathway of primate vision, which processes information due to luminance changes in the scene (Lichtsteiner et al., 2008; Liu et al., 2015). We provide the raw data along with the 3D motion and the scene geometry; these data allow for evaluating algorithms on the classic structure-from-motion problems of image motion estimation, 3D motion estimation, reconstruction, and segmentation by depth.
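
As an example of how such ground truth supports evaluation, image motion estimates can be scored against the provided optic flow with the standard average endpoint error and average angular error metrics. A minimal sketch, assuming flow fields stored as (H, W, 2) arrays of per-pixel (u, v) displacements:

```python
import numpy as np

def flow_errors(flow_est, flow_gt):
    """Average endpoint error (AEE, in pixels) and average angular
    error (AAE, in degrees) between an estimated and a ground-truth
    flow field, both given as (H, W, 2) arrays of (u, v) vectors."""
    # Endpoint error: Euclidean distance between flow vectors per pixel.
    epe = np.linalg.norm(flow_est - flow_gt, axis=2)

    # Angular error: angle between homogeneous (u, v, 1) flow vectors,
    # as commonly used for frame-based optical flow benchmarks.
    u_e, v_e = flow_est[..., 0], flow_est[..., 1]
    u_g, v_g = flow_gt[..., 0], flow_gt[..., 1]
    num = u_e * u_g + v_e * v_g + 1.0
    den = np.sqrt(u_e**2 + v_e**2 + 1.0) * np.sqrt(u_g**2 + v_g**2 + 1.0)
    aae = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))

    return epe.mean(), aae.mean()
```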
