Abstract
There is much evidence to support the validity of optical flow as a basis for many computational vision tasks. However, optical flow methods are problematic due to inherent errors in the computational methods involved in the processing. Many of the problems associated with the analysis of image flow can be alleviated if information extracted from a long sequence of images, rather than simple between-frame processing, is used as the basis for deriving the flow information. We discuss work undertaken to study the implementation of a method that provides the necessary long-sequence information. Using an algorithm whose results resemble those achieved in multi-image, overlay photography, we combine features from several sequential images, with each pair of successive images separated by a short time interval. Features are then associated according to the object features that generated them, and the resulting point lists are analyzed to determine the long-sequence flow information. We apply flow-based computational vision methods (previously implemented using only simple flow) to the long-sequence flow. Since the long-sequence flow overcomes many of the effects of noise and quantization errors, the result is the derivation of robust visual information.
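The core claim, that a velocity estimate drawn from a long feature track is more robust to noise and quantization error than a two-frame difference, can be illustrated with a minimal sketch. The functions and data below are hypothetical and not taken from the paper; they merely contrast a least-squares fit over a multi-frame point list with a simple between-frame estimate on the same noisy track.

```python
# Hypothetical sketch (not the paper's algorithm): a feature tracked over
# many frames yields a point list of (time, position) pairs; fitting a
# motion model to the whole list averages out per-frame measurement noise.

def fit_velocity(track):
    """Least-squares slope of (t, x) pairs: a long-sequence flow estimate."""
    n = len(track)
    mean_t = sum(t for t, _ in track) / n
    mean_x = sum(x for _, x in track) / n
    num = sum((t - mean_t) * (x - mean_x) for t, x in track)
    den = sum((t - mean_t) ** 2 for t, _ in track)
    return num / den

def two_frame_velocity(track):
    """Simple between-frame flow: displacement over one frame interval."""
    (t0, x0), (t1, x1) = track[0], track[1]
    return (x1 - x0) / (t1 - t0)

# A feature moving at 2.0 px/frame, observed with fixed quantization-like
# noise of up to +/-0.5 px (illustrative values):
noise = [0.3, -0.2, 0.4, -0.5, 0.1, -0.3, 0.5, -0.1, 0.2, -0.4]
track = [(t, 2.0 * t + noise[t]) for t in range(10)]
```

On this track the two-frame estimate is off by roughly a quarter of the true speed, while the ten-frame fit lands within a few hundredths of it, which is the effect the long-sequence approach exploits.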