Abstract

In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise not only complicates the observation model classically used for 3D reconstruction but also the estimation procedure, which then requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while remaining robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
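The abstract describes a non-iterative, per-pixel depth estimation step applied to histograms of photon times of arrival. As a rough illustration of that idea (not the authors' similarity measure), the sketch below estimates depth by cross-correlating a ToA histogram with the instrumental response function (IRF); all names, bin widths, and signal levels are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: non-iterative depth estimation from a per-pixel
# histogram of photon times of arrival (ToA). This is a generic
# matched-filter estimator, not the robust similarity measure of the paper.

def estimate_depth(histogram, irf, bin_width_s=1e-10, c=3e8):
    """Estimate depth by maximizing the cross-correlation between the
    ToA histogram and the instrumental response function (IRF)."""
    # Cross-correlate the histogram with the IRF (symmetric here, so the
    # flip implicit in correlation does not matter); the peak location
    # gives the most likely round-trip time bin.
    scores = np.correlate(histogram, irf, mode="same")
    peak_bin = int(np.argmax(scores))
    # Convert the time-of-flight bin to a range (half the round trip).
    return peak_bin * bin_width_s * c / 2.0

# Toy example: a surface at bin 40 plus uniform ambient background photons.
rng = np.random.default_rng(0)
irf = np.exp(-0.5 * (np.arange(-5, 6) / 1.5) ** 2)  # Gaussian-like IRF
hist = rng.poisson(1.0, size=100).astype(float)     # ambient photon counts
hist[38:43] += 20 * irf[3:8]                        # signal photons
depth = estimate_depth(hist, irf)                   # about 0.6 m here
```

Because the estimator is a single argmax over correlation scores, it needs no iterations, which is what makes this class of estimator attractive for online processing.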

Highlights

  • Fast and reliable reconstruction of 3D scenes using single-photon light detection and ranging is extremely important for a variety of applications, including environmental monitoring [1], [2], autonomous driving [3] and defence [4], [5]

  • Since single-photon lidar technology consists of illuminating the scene with a pulsed laser and analyzing the time of arrival (ToA) of reflected photons, successful reconstruction from a few return photons enables the consideration of shorter integration/acquisition times and the analysis of highly dynamic scenes

  • We considered a dynamic scene which consists of two people, standing approximately 1.5 metres apart, exchanging a ≈ 220 mm diameter ball at a distance of 320 metres from the lidar system


Summary

INTRODUCTION

Fast and reliable reconstruction of 3D scenes using single-photon light detection and ranging (lidar) is extremely important for a variety of applications, including environmental monitoring [1], [2], autonomous driving [3] and defence [4], [5]. In [30], we proposed an online reconstruction method relying on individual photon-detection events (i.e., binary frames), which was used to reconstruct sequentially a series of depth images using at most one photon per pixel and frame. Although this sequential approach leverages correlations between successive time frames, it does not allow the analysis of histograms (i.e., reconstruction after several illumination periods), as the estimation is performed after each illumination period. Here we propose a new online/sequential estimation strategy, to the best of our knowledge for the first time, for the reconstruction of dynamic 3D scenes from streams of photon detection events. This method, based on assumed density filtering, is highly scalable and computationally attractive.
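The introduction names assumed density filtering (ADF) as the engine of the online estimation. As a minimal sketch of the ADF idea, assuming a per-pixel Gaussian approximate posterior over depth that is updated after each frame (a simplified stand-in for the authors' model, which also handles target detection and spatial correlation), one step could look like:

```python
import numpy as np

# Illustrative sketch of assumed density filtering for online depth
# tracking: each pixel keeps a Gaussian approximate posterior over depth,
# updated after every frame. Names and parameters are assumptions, not
# the authors' implementation.

def adf_update(mu, var, obs_depth, obs_var, process_var=1e-4):
    """One ADF step: predict (inflate variance to allow scene motion),
    then fuse the new photon-derived depth observation. Moment matching
    is exact here because prior and likelihood are both Gaussian."""
    var_pred = var + process_var            # predict: the scene may move
    gain = var_pred / (var_pred + obs_var)  # weight of the new observation
    mu_new = mu + gain * (obs_depth - mu)   # correct toward the observation
    var_new = (1.0 - gain) * var_pred       # reduced posterior uncertainty
    return mu_new, var_new

# Toy example: track a surface drifting from 2.00 m to 2.05 m over 50 frames.
rng = np.random.default_rng(1)
mu, var = 0.0, 10.0                         # vague initial belief
for t in range(50):
    true_depth = 2.0 + 0.001 * t
    obs = true_depth + rng.normal(0, 0.02)  # noisy per-frame depth estimate
    mu, var = adf_update(mu, var, obs, obs_var=0.02**2)
```

Each update is a few arithmetic operations per pixel with no coupling across frames beyond the carried (mu, var) pair, which is why this style of filtering scales to online, parallel processing of detector-array streams.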

Observation Models
Robust Estimation Using β-Divergences
Pseudo-Bayesian Estimation
Preliminary Comparative Study
Target Detection and Additional Parameter Estimation
Approximation Using Assumed Density Filtering
Online Target Detection
RESULTS
CONCLUSION
