Abstract

We propose a non-learning depth completion method for a sparse depth map captured by a light detection and ranging (LiDAR) sensor, guided by a pair of stereo images. Conventional stereo-aided depth completion methods generally have two limitations. (i) They assume that the given sparse depth map is accurately aligned with the input image, whereas this alignment is difficult to achieve in practice. (ii) Their accuracy is limited in the long range because depth is estimated from pixel disparity. To address these limitations, we propose selective stereo matching (SSM), which searches for the most appropriate depth value for each image pixel among its neighboring projected LiDAR points within an energy minimization framework. This depth selection approach can handle any type of mis-projection. Moreover, SSM is advantageous in terms of long-range depth accuracy because it directly uses the LiDAR measurements rather than depths derived from stereo. Because SSM is a discrete process, we apply variational smoothing with a binary anisotropic diffusion tensor (B-ADT) to generate a continuous depth map while preserving depth discontinuities across object boundaries. Experimentally, compared with the previous state-of-the-art stereo-aided depth completion, the proposed method reduced the mean absolute error (MAE) of depth estimation to 0.65 times and achieved approximately twice the accuracy in the long range. Moreover, under various LiDAR-camera calibration errors, the proposed method reduced the depth estimation MAE to 0.34-0.93 times that of previous depth completion methods.
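The core idea of SSM described above, choosing a depth for each pixel from nearby projected LiDAR candidates by minimizing a cost, can be illustrated with a minimal per-pixel sketch. This is not the paper's implementation: the function names, the quadratic data cost, and the simple smoothness term are assumptions for illustration only.

```python
import numpy as np

def select_depth(candidates, data_cost, neighbor_depth, lam=0.1):
    """Illustrative per-pixel depth selection.

    For one pixel, evaluate each candidate LiDAR depth with a data term
    (e.g., a stereo matching cost) plus a smoothness term that penalizes
    deviation from a neighboring pixel's depth, and pick the minimizer.
    All names and weights here are hypothetical, not from the paper.
    """
    costs = [data_cost(d) + lam * abs(d - neighbor_depth) for d in candidates]
    return candidates[int(np.argmin(costs))]
```

Because the selected value is always an actual LiDAR measurement rather than a disparity-derived depth, a mis-projected outlier candidate (e.g., 30.0 m leaking across an object boundary) is simply outscored by a consistent candidate, which is how the selection step tolerates mis-projection.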

Highlights

  • Depth measurement is conducted in several ways, such as time-of-flight (ToF), stereo cameras, and structured light projection [1]

  • We examined the robustness of the proposed method against LiDAR-camera extrinsic calibration errors

  • Parameter study: we evaluated the effect of the value of r on the mean absolute error (MAE) using the KITTI dataset (Section IV-A), the KITTI dataset with the blueprint condition, and the Komaba dataset with the lines64 condition (Section IV-B)


Introduction

Depth measurement is conducted in several ways, such as time-of-flight (ToF), stereo cameras, and structured light projection [1]. Stereo cameras and structured light projection estimate depth from pixel disparity, so their precision decreases dramatically with distance, since a small disparity change corresponds to a large depth change in the long range. ToF sensors have higher precision in the long range. Among ToF sensors, light detection and ranging (LiDAR) is used in various systems that require adaptability to dynamic environments, e.g., automated driving and robots, because of its active sensing capability and robustness to environmental changes. In terms of measurement density, however, LiDAR is limited by the number of lasers in its array and its narrow measurement beams.
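The quadratic degradation of stereo precision with distance follows from the standard pinhole relation z = f·b/d (focal length f in pixels, baseline b, disparity d): a disparity error Δd maps to a depth error of roughly z²·Δd/(f·b). A short sketch, using assumed KITTI-like camera parameters (f = 700 px, b = 0.54 m) and a half-pixel disparity error:

```python
# Stereo depth from disparity: z = f * b / d.
# f (focal length in pixels) and b (baseline in meters) below are
# illustrative, roughly KITTI-like values, not from the paper.
f, b = 700.0, 0.54

def depth_error(z, dd=0.5):
    # A disparity error of dd pixels maps to a depth error of about
    # z^2 * dd / (f * b), so stereo precision degrades quadratically
    # with distance, while LiDAR range error stays nearly constant.
    return z * z * dd / (f * b)
```

With these numbers, the depth uncertainty is about 0.13 m at 10 m but grows to over 3 m at 50 m, which is why directly reusing LiDAR measurements, as SSM does, helps in the long range.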

