Abstract

In this paper, we present a novel method for estimating dense scene flow from an aligned depth map and color image within a variational framework. Since most scenes can be regarded as compositions of independently moving 3-D rigid parts, we apply a 3-D local rigidity assumption to the data term as a per-pixel fidelity measure. Meanwhile, to improve the accuracy of scene flow estimation at motion boundaries, we assume the depth map and color image are aligned and exploit the boundary information of the depth map in the smoothness term, which is weighted by a depth-map-driven anisotropic diffusion tensor. In addition, an efficient primal-dual algorithm is implemented to solve the variational formulation. Our method is evaluated on the Middlebury data sets and on a real-world data set captured with a Kinect sensor. Experimental results show that our method yields dense, accurate scene flow and preserves motion boundaries well.
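The abstract's depth-map-driven anisotropic diffusion tensor can be illustrated with a minimal sketch. The paper's exact weighting is not given here, so the edge-stopping function `exp(-alpha * |∇d|^beta)` and the construction `D = I - (1 - w) n nᵀ` (which damps smoothing across depth edges while keeping it full along them) are assumptions in the spirit of standard anisotropic regularizers, not the authors' precise formulation:

```python
import numpy as np

def diffusion_tensor(depth, alpha=5.0, beta=0.5):
    """Per-pixel 2x2 anisotropic diffusion tensor driven by depth gradients.

    A hedged sketch: D = I - (1 - w) * n n^T, where n is the unit depth
    gradient and w = exp(-alpha * |grad d|^beta) is an edge-stopping weight.
    Smoothing is damped across depth discontinuities (eigenvalue w along n)
    and left untouched along them (eigenvalue 1 along the edge direction).
    """
    gy, gx = np.gradient(depth.astype(np.float64))
    mag = np.sqrt(gx**2 + gy**2)
    eps = 1e-8                                  # avoid division by zero
    nx, ny = gx / (mag + eps), gy / (mag + eps)  # unit gradient direction n
    w = np.exp(-alpha * mag**beta)               # weight -> 1 in flat regions
    # Symmetric 2x2 tensor per pixel, returned as its three unique entries.
    D11 = 1.0 - (1.0 - w) * nx * nx
    D12 = -(1.0 - w) * nx * ny
    D22 = 1.0 - (1.0 - w) * ny * ny
    return D11, D12, D22
```

In flat depth regions the tensor reduces to the identity (isotropic smoothing), while at a depth step the eigenvalue along the gradient shrinks toward zero, which is what lets the smoothness term preserve motion boundaries that coincide with depth boundaries.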
