Abstract

Benefiting from their extremely low latency, event cameras have been used in Structured Light Imaging (SLI) to reconstruct depth surfaces. However, existing methods focus only on improving scanning speed and neglect the perturbations that event noise and timestamp jitter introduce into depth estimation. In this paper, we build a hybrid SLI system equipped with an event camera, a high-resolution frame camera, and a digital light projector, where a single intensity frame is adopted as guidance to enhance the event-based SLI quality. To this end, we propose a Multi-Modal Feature Fusion Network (MFFN) consisting of a feature fusion module and an upscale module that jointly fuse events with a single intensity frame, suppress event perturbations, and reconstruct a high-quality depth surface. Further, for training MFFN, we build a new Structured Light Imaging based on Event and Frame cameras (EF-SLI) dataset collected from the hybrid SLI system, containing paired inputs composed of a set of synchronized events and one corresponding frame, with ground-truth references obtained by a high-quality SLI approach. Experiments demonstrate that our proposed MFFN outperforms state-of-the-art event-based SLI approaches in accuracy across different scanning speeds.
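To make the fusion pipeline concrete, the sketch below illustrates the two ingredients the abstract names: converting an asynchronous event stream into a dense tensor, and fusing it with a single intensity frame before upscaling. This is a hedged, minimal NumPy illustration, not the actual MFFN: the voxel-grid representation, the channel-wise concatenation, and the nearest-neighbour upscaling are all assumptions standing in for the paper's learned modules, and every function name here is hypothetical.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events (x, y, t, polarity) into a temporal voxel grid.

    A common event representation (hypothetical preprocessing step);
    the paper's actual input encoding may differ.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t = events[:, 2]
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9)  # scale to [0, 1]
    bins = np.clip((t_norm * num_bins).astype(int), 0, num_bins - 1)
    xs = events[:, 0].astype(int)
    ys = events[:, 1].astype(int)
    np.add.at(grid, (bins, ys, xs), events[:, 3])  # sum polarities per cell
    return grid

def fuse_and_upscale(event_grid, frame, scale=2):
    """Toy stand-in for MFFN's feature fusion and upscale modules:
    concatenate the two modalities channel-wise, then upscale by
    nearest-neighbour repetition (the real network learns both steps)."""
    fused = np.concatenate([event_grid, frame[None, ...]], axis=0)
    return fused.repeat(scale, axis=1).repeat(scale, axis=2)

# Tiny synthetic example: four events on an 8x8 sensor plus one frame.
events = np.array([[1, 1, 0.0, 1], [2, 3, 0.3, -1], [5, 5, 0.6, 1], [7, 7, 1.0, 1]])
frame = np.full((8, 8), 0.5, dtype=np.float32)
voxels = events_to_voxel_grid(events, num_bins=3, height=8, width=8)
out = fuse_and_upscale(voxels, frame, scale=2)
print(out.shape)  # -> (4, 16, 16)
```

In the actual system, the concatenation and repetition above would be replaced by learned convolutional fusion and upsampling layers trained on the EF-SLI dataset, so that event noise and timestamp jitter are suppressed rather than merely averaged.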
