Abstract

This paper is concerned with the segmentation of scene objects on the basis of their unique uniform motions. A number of previous approaches have been founded upon greyscale spatio-temporal gradient-based estimation of optic flow, and these have shown some success. However, optic flow only permits a limited range of recoverable motion displacements and exhibits relatively low robustness to noise. Multiresolution image data can be used to increase the range of allowed motion displacements, but the correct resolution at which to compute motion estimates is difficult to determine. It is postulated that, with a priori knowledge of the elementary motions arising from the dynamic scene, the resolution level of a multiresolution support can be set automatically. These elementary motions may also be used to increase noise robustness by permitting a relative, rather than absolute, classification of motion. We present a multi-stage algorithm in which feature correspondences are used to create a dictionary of elementary motions arising from the scene. The scene is initially segmented into small primitive regions using a maximum a posteriori (MAP) criterion in conjunction with a Markov random field (MRF) model and the motion dictionary. An affine motion model and a maximum likelihood (ML) criterion are then used to fuse primitive regions of coherent motion into the full scene segmentation. Results for both synthetic and real imagery are given, demonstrating that scene segmentation may be performed across a wide range of motion displacements and at high levels of additive noise.
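The affine motion model mentioned above is the standard six-parameter model, u(x, y) = a1 + a2·x + a3·y and v(x, y) = a4 + a5·x + a6·y. The abstract does not specify the estimation procedure, so the following is only a minimal sketch of how such a model might be fitted to feature correspondences by linear least squares; the function name and interface are illustrative, not the paper's.

```python
import numpy as np

def fit_affine_motion(points, displacements):
    """Least-squares fit of the six-parameter affine motion model
        u(x, y) = a1 + a2*x + a3*y
        v(x, y) = a4 + a5*x + a6*y
    to a set of feature correspondences.

    points        : (N, 2) array of (x, y) positions in the first frame
    displacements : (N, 2) array of (u, v) displacements to the second frame
    Returns the parameter vector (a1..a6) and the RMS residual.
    """
    x, y = points[:, 0], points[:, 1]
    ones = np.ones_like(x)
    # Design matrix shared by the u and v components.
    A = np.column_stack([ones, x, y])
    params_u, _, _, _ = np.linalg.lstsq(A, displacements[:, 0], rcond=None)
    params_v, _, _, _ = np.linalg.lstsq(A, displacements[:, 1], rcond=None)
    pred = np.column_stack([A @ params_u, A @ params_v])
    rms = np.sqrt(np.mean(np.sum((pred - displacements) ** 2, axis=1)))
    return np.concatenate([params_u, params_v]), rms
```

In the spirit of the ML fusion stage described above, two primitive regions could be merged when a single joint affine fit explains the pooled correspondences nearly as well as separate fits for each region; the precise merging test used in the paper is not given in the abstract.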
