Abstract

This paper presents a scene flow estimation method for a camera-LiDAR (Light Detection And Ranging) system based on depth map upsampling and layer assignment. The 3D geometry and motion of the observed scene are estimated simultaneously from two consecutive frames of a camera and a LiDAR. The proposed technique begins with dense depth map upsampling guided by the corresponding RGB image. The scene is then segmented into moving layers by a hybrid method. Finally, the motion of each layer is constrained by the RGB and depth images, which provide a coarse 3D rigid motion. Experimental results on both public datasets and a real-world platform demonstrate the effectiveness of this technique.
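The first stage of the pipeline, RGB-guided densification of a sparse LiDAR depth map, can be sketched with a joint bilateral upsampling filter. This is a generic stand-in, not the paper's actual upsampling algorithm; the function name, parameters, and the use of a grayscale guidance term are all assumptions for illustration.

```python
import numpy as np

def joint_bilateral_upsample(sparse_depth, rgb, sigma_s=2.0, sigma_r=20.0, radius=2):
    """Densify a sparse depth map using the RGB image as guidance.

    sparse_depth : HxW array, 0.0 where no LiDAR return exists.
    rgb          : HxWx3 guidance image aligned with the depth map.
    Hypothetical sketch of a guided-upsampling stage; the paper's
    actual method is not specified in the abstract.
    """
    H, W = sparse_depth.shape
    gray = rgb.mean(axis=2)  # simple intensity guidance channel
    dense = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            num = den = 0.0
            # Weighted average over valid LiDAR points in a local window,
            # with spatial and intensity (range) Gaussian weights.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and sparse_depth[ny, nx] > 0:
                        w = np.exp(
                            -(dy * dy + dx * dx) / (2.0 * sigma_s ** 2)
                            - (gray[y, x] - gray[ny, nx]) ** 2 / (2.0 * sigma_r ** 2)
                        )
                        num += w * sparse_depth[ny, nx]
                        den += w
            dense[y, x] = num / den if den > 0 else 0.0
    return dense
```

The intensity term keeps depth from bleeding across RGB edges, which is the usual motivation for image-guided (rather than purely geometric) upsampling.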
