Depth-map estimation directly reflects the geometry of visible surfaces in the environment and plays an important role in perception and decision-making for intelligent robots. However, sparse LiDAR provides only low-resolution depth information, which poses a major challenge for accurate sensing algorithms. To address this problem, this article proposes a novel fusion framework that generates a dense depth map from an event camera and sparse LiDAR. The approach uses the geometric information in the point cloud as prior knowledge and clusters the point cloud data with an improved density clustering algorithm. Combined with a 3-D surface model of each cluster, the approach reconstructs the 3-D coordinates of event points and then obtains a dense depth map through depth expansion and hole filling. Finally, we evaluate our approach on the MVSEC dataset and in real-world applications. Experimental results show that, compared with other approaches, ours obtains more accurate depth information.
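The abstract does not detail the improved density clustering algorithm used on the LiDAR point cloud. As a rough illustration of what the clustering stage could look like, the sketch below runs a plain (unimproved) DBSCAN over 3-D points; the function name, `eps`, and `min_pts` values are hypothetical and not taken from the paper:

```python
import math

def dbscan(points, eps=0.5, min_pts=3):
    """Plain DBSCAN over 3-D points; returns one label per point (-1 = noise).

    Illustrative sketch only -- the paper's "improved density clustering"
    is not specified in the abstract, so this is standard DBSCAN.
    """
    labels = [None] * len(points)

    def neighbors(i):
        # Brute-force range query; a real pipeline would use a k-d tree.
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1  # provisional noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reclaimed as a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:
                seeds.extend(j_nbrs)  # j is a core point; expand cluster
    return labels

# Two well-separated 3-D blobs should yield two clusters.
pts = [(0, 0, 0), (0.1, 0, 0), (0, 0.1, 0),
       (5, 5, 5), (5.1, 5, 5), (5, 5.1, 5)]
labels = dbscan(pts)
```

In the full pipeline, each resulting cluster would then be fitted with a 3-D surface model that supplies depth at event coordinates.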