Abstract

Determining the spatial motion of a moving camera from a video is a classical problem in computer vision. The difficulty of the problem is that the flow pattern directly observable in the video is generally not the complete flow field induced by the motion, but only a partial component of it, known as the normal flow. In this paper, we present a direct method that requires neither the establishment of feature correspondences nor the recovery of the full optical flow between two image frames; instead, it uses all observable normal flow data directly to recover the camera motion. We propose a two-stage iterative algorithm that searches the motion space for the solution in a coarse-to-fine framework. The first stage uses the direction component of the normal flow. Each normal flow datum constrains the direction of motion to a subset of the solution space, and intersecting these constraints over all available normal flow data reduces the motion ambiguity to a certain extent. We then exploit the fact that the rotational magnitude is global to all image positions to constrain the motion parameters further. Once the camera motion is determined, the depth map of the imaged scene (up to an arbitrary scale) can be recovered. Experimental results on synthetic data and real images are provided to demonstrate the performance of the proposed method.
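To make the constraint that a single normal flow measurement places on the camera motion concrete, the sketch below (not the authors' two-stage algorithm, only a minimal illustration under the standard instantaneous motion-field model of Longuet-Higgins and Prazdny) uses the positive-depth consistency idea: after subtracting the rotational part of the flow from the observed normal flow, the sign of the remainder must agree with the sign of the translational flow projected onto the gradient direction, since scene depth is positive. Counting sign violations over all normal flow data while scanning candidate translation directions on the unit sphere gives a coarse estimate of the heading; for brevity the rotation is held at its true value here, whereas the paper's method searches translation and rotation jointly, coarse to fine. All variable names and parameter values are illustrative.

import numpy as np

def flow_components(pts, t, w):
    # Translational and rotational parts of the instantaneous motion field
    # (focal length 1). The translational part is returned multiplied by depth Z,
    # i.e. u_trans = ut / Z; the rotational part ur is independent of depth.
    x, y = pts[:, 0], pts[:, 1]
    U, V, W = t
    A, B, C = w
    ut = np.stack([-U + x * W, -V + y * W], axis=1)
    ur = np.stack([A * x * y - B * (x**2 + 1) + C * y,
                   A * (y**2 + 1) - B * x * y - C * x], axis=1)
    return ut, ur

def sign_violations(pts, grads, nflow, t, w):
    # Count normal flow samples whose sign contradicts the positive-depth
    # constraint under the candidate motion (t, w).
    ut, ur = flow_components(pts, t, w)
    residual = nflow - np.sum(grads * ur, axis=1)      # derotated normal flow
    trans_proj = np.sum(grads * ut, axis=1)            # translational component along gradient
    mask = (np.abs(residual) > 1e-6) & (np.abs(trans_proj) > 1e-6)
    return np.count_nonzero(np.sign(residual[mask]) != np.sign(trans_proj[mask]))

# Synthetic test: recover the translation direction from normal flow alone.
rng = np.random.default_rng(0)
pts = rng.uniform(-0.5, 0.5, size=(500, 2))            # image coordinates
depth = rng.uniform(2.0, 10.0, size=500)               # unknown positive depths
grads = rng.normal(size=(500, 2))
grads /= np.linalg.norm(grads, axis=1, keepdims=True)  # unit gradient directions

t_true = np.array([0.3, -0.1, 1.0]); t_true /= np.linalg.norm(t_true)
w_true = np.array([0.01, -0.02, 0.005])
ut, ur = flow_components(pts, t_true, w_true)
nflow = np.sum(grads * (ut / depth[:, None] + ur), axis=1)   # observed normal flow

# Coarse search over candidate translation directions on the unit sphere.
best = None
for az in np.linspace(-np.pi, np.pi, 72, endpoint=False):
    for el in np.linspace(-np.pi / 2, np.pi / 2, 36):
        t = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
        score = sign_violations(pts, grads, nflow, t, w_true)
        if best is None or score < best[0]:
            best = (score, t)

print("violations:", best[0], " estimated direction of translation:", np.round(best[1], 3))

In this noise-free setup the true heading produces zero sign violations, while directions far from it accumulate many; a coarse-to-fine refinement of the grid, as described in the abstract, would then narrow the surviving region of the motion space.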
