Abstract

Estimating the motion of a moving object from a sequence of images is of prime interest in computer vision. This paper reviews the different approaches developed to estimate motion parameters from a sequence of two range images. We give the mathematical formulation of the problem along with the various modifications made by different investigators to adapt the formulation to their algorithms. The shortcomings and advantages of each method are also briefly mentioned. The methods are divided according to the type of feature used in the motion estimation task, and we address the representational and computational issues for each of the methods described. Most of the earlier approaches used local features such as corners (points) or edges (lines) to obtain the transformation. Local features are sensitive to noise and quantization errors, which introduces uncertainty into the motion estimates. Using global features, such as surfaces, makes motion computation more robust, at the expense of considerably greater complexity. A common error is to assume that the best-fitting affine transform is the best estimate of the desired motion; in general this is false. It is important to distinguish between the motion transform and the general affine transform, since an affine transform may not be physically realizable by a rigid object.
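To make the rigid-versus-affine distinction concrete, the sketch below contrasts the two fits in the standard point-correspondence setting. It is an illustration only, not the formulation of any particular method reviewed here; NumPy and the SVD-based (Kabsch/Procrustes) rigid solution are assumptions introduced for the example. The rigid fit constrains the estimated matrix to be a proper rotation (orthonormal, determinant +1), whereas the unconstrained affine fit may return shear or scale that no rigid body can produce.

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping points P (N x 3) onto Q (N x 3),
    with R constrained to be a proper rotation (Kabsch/Procrustes solution)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cP).T @ (Q - cQ)                          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T            # det(R) = +1
    t = cQ - R @ cP
    return R, t

def fit_affine(P, Q):
    """Unconstrained least-squares affine transform (A, t): A need not be
    orthonormal, so the fit is generally not a physically realizable rigid motion."""
    Ph = np.hstack([P, np.ones((len(P), 1))])          # homogeneous coordinates
    X, *_ = np.linalg.lstsq(Ph, Q, rcond=None)         # solves Ph @ X ~= Q
    A, t = X[:3].T, X[3]                               # q ~= A @ p + t
    return A, t
```

With noisy or quantized correspondences, the affine fit typically achieves a lower residual than the rigid fit, which is precisely why mistaking it for the motion estimate is tempting but incorrect for a rigid object.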
