Abstract

Depth sensing is an important problem in many applications such as autonomous driving, robotics, and automation. This paper presents an adaptive active fusion method for scene depth estimation using an RGB camera and a single-point LiDAR. An active scanning mechanism is proposed to guide laser scanning toward critical visual and saliency features, and a convolutional spatial propagation network (CSPN) is designed to generate and refine a full depth map from the sparse depth scans. The active scanning mechanism generates a depth mask using log-spectrum saliency detection, Canny edge detection, and uniform sampling; the mask indicates critical regions that require high-resolution laser scanning. To reconstruct a full depth map, the CSPN extracts affinity matrices from the sparse depth scans while preserving the global spatial information in the RGB images. The proposed method was evaluated against state-of-the-art methods on the NYUv2 dataset, and the experiments demonstrate higher reconstruction accuracy and greater robustness to measurement noise. The method was also validated in real-world scenarios.
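
The abstract names three cues for the scan mask but gives no implementation details. The sketch below is a minimal, hypothetical Python illustration (using OpenCV and NumPy) of how spectral-residual (log-spectrum) saliency, Canny edges, and a uniform grid could be combined into a binary mask of points for the single-point LiDAR to scan. All thresholds, kernel sizes, and the grid stride are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def spectral_residual_saliency(gray: np.ndarray) -> np.ndarray:
    """Log-spectrum (spectral residual) saliency in the style of Hou & Zhang (2007)."""
    f = np.fft.fft2(gray.astype(np.float32))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # Spectral residual: log amplitude minus its locally averaged version.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    # Reconstruct with the original phase and take the squared magnitude.
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (11, 11), 2.5)
    return cv2.normalize(sal, None, 0.0, 1.0, cv2.NORM_MINMAX)

def scan_mask(rgb_bgr: np.ndarray, sal_thresh: float = 0.5,
              grid_step: int = 16) -> np.ndarray:
    """Binary mask of pixels to be measured by the single-point LiDAR.

    sal_thresh and grid_step are illustrative defaults, not paper values.
    """
    gray = cv2.cvtColor(rgb_bgr, cv2.COLOR_BGR2GRAY)
    salient = spectral_residual_saliency(gray) > sal_thresh   # salient regions
    edges = cv2.Canny(gray, 50, 150) > 0                      # likely depth discontinuities
    uniform = np.zeros_like(edges)
    uniform[::grid_step, ::grid_step] = True                  # coarse uniform coverage
    return salient | edges | uniform
```

In the full pipeline described by the abstract, depth would presumably be measured only at the masked pixels, and the resulting sparse depth map handed to the CSPN for densification.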
