Abstract
Depth-map estimation directly captures the geometry of visible surfaces in the environment and plays an important role in perception and decision-making for intelligent robots. However, sparse LiDAR provides only low-resolution depth information, which poses a significant challenge for accurate sensing algorithms. To address this problem, this article proposes a novel fusion framework that generates a dense depth map from an event camera and sparse LiDAR. The approach uses the geometric information provided by the point cloud as prior knowledge and clusters the point-cloud data with an improved density-based clustering algorithm. Combined with a 3-D surface model of each cluster, the approach reconstructs the 3-D coordinates of event points and then obtains a dense depth map through depth expansion and hole filling. Finally, we deploy our approach on the MVSEC dataset and in real-world applications. Experimental results show that, compared with other approaches, our approach obtains more accurate depth information.
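To make the pipeline sketched in the abstract concrete, the following is a minimal illustration, not the authors' implementation: it clusters a sparse LiDAR point cloud with off-the-shelf DBSCAN as a stand-in for the paper's improved density clustering, fits a plane per cluster as a simple 3-D surface model, and assigns a depth to each event pixel by intersecting its viewing ray with the associated cluster's plane. The camera intrinsics `fx, fy, cx, cy`, the parameter values, and the centroid-based event-to-cluster association rule are all illustrative assumptions.

```python
# Hypothetical sketch of the abstract's pipeline; not the paper's code.
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_point_cloud(points, eps=0.5, min_samples=10):
    """Label each 3-D LiDAR point with a cluster id (-1 = noise)."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)


def fit_plane(points):
    """Least-squares plane z = a*x + b*y + c for one cluster."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)


def depth_at_events(events_xy, labels, points, fx, fy, cx, cy):
    """Estimate a depth for each event pixel from its cluster's plane model."""
    planes = {k: fit_plane(points[labels == k]) for k in set(labels) if k != -1}
    centroids = {k: points[labels == k].mean(axis=0) for k in planes}
    depths = np.full(len(events_xy), np.nan)
    for i, (u, v) in enumerate(events_xy):
        # Associate the event with the cluster whose centroid projects
        # closest to this pixel (a crude stand-in for the paper's rule).
        best, best_d = None, np.inf
        for k, c in centroids.items():
            pu, pv = fx * c[0] / c[2] + cx, fy * c[1] / c[2] + cy
            d = (pu - u) ** 2 + (pv - v) ** 2
            if d < best_d:
                best, best_d = k, d
        if best is not None:
            a, b, cst = planes[best]
            # Intersect the viewing ray (x, y, z) = z*((u-cx)/fx, (v-cy)/fy, 1)
            # with the plane z = a*x + b*y + c to recover the event depth.
            denom = 1.0 - a * (u - cx) / fx - b * (v - cy) / fy
            if abs(denom) > 1e-6:
                depths[i] = cst / denom
    return depths
```

In this sketch, the per-event depths would then be splatted into the image grid and densified (the abstract's depth expansion and hole filling), for example by morphological dilation and inpainting of the remaining gaps.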