Abstract

This paper describes the preliminary stages in the development of a predictive feed-forward (PFF) stereo-based tracking module. The objective of the module is to exploit the spatio-temporal coherence that exists in a sequence of stereo images, in the context of providing a visual control mechanism for a mobile vehicle with uncertainty in position. PFF provides a method by which the representation of a 3D scene can be maintained and evolved over time. Furthermore, quickening strategies can exploit this spatio-temporal coherence by using previously obtained depth values and approximate trajectory information to accelerate the process that actually achieves the stereo correspondences.

Much research in computer vision has been developed in snapshot mode, concentrating attention on a single image or a small number of frames obtained either in synchrony or from a short movie sequence; the computational overheads involved in analysing a long sequence of images have proved prohibitive. It has been assumed, albeit implicitly, that algorithms developed in this way would, given appropriate parallel computer architectures, eventually be able to perform in real time on continuous image sequences. That is, the whole process involved in the recovery of scene descriptions would begin afresh with each image frame, and some [not fully defined] extra module would be responsible for maintaining an evolving model of the environment based upon these descriptions. In the context of the control of a mobile robotic vehicle it is important to distinguish between the twin goals of obtaining an accurate model of the environment and determining the current position within it. Hence dynamic vision can be decomposed into [at least] two important modules:

1) The maintenance of an accurate and, as far as possible, topologically complete scene model. This will include: the combination of multiple views [1,2] to give more complete and robust data; the identification and inclusion of novel (not previously seen) features; and the determination of free space within which the robot can move.

2) The use of visual tracking to provide the control signal required to navigate an autonomous vehicle through an unstructured or partially structured environment, using as beacons a subset of the scene features (perhaps identified outside the tracking module itself).

The visual throughput and temporal response required by each task are very different. For example, when using visual feedback as a control mechanism it will be necessary to provide a much higher sample rate than that at which the model of the environment needs to be updated. Furthermore, the actual elapsed time
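The quickening idea above can be illustrated with a minimal sketch. Assuming a rectified pinhole stereo rig, a feature's previously recovered 3D position and the vehicle's approximate inter-frame motion (R, t) predict where the feature should reappear and at what disparity, so the correspondence search can be confined to a small window rather than the whole epipolar line. All function names, the camera parameters, and the fixed search margin are illustrative assumptions, not the paper's actual implementation:

```python
def predict_feature(point_cam, R, t, fx, cx, cy, baseline):
    """Predict the image position and disparity of a previously matched
    3D point in the next stereo pair, given approximate ego-motion (R, t).
    point_cam is [X, Y, Z] in the current left-camera frame (metres)."""
    # Transfer the point into the predicted next left-camera frame.
    p = [sum(R[i][j] * point_cam[j] for j in range(3)) + t[i]
         for i in range(3)]
    X, Y, Z = p
    u = fx * X / Z + cx       # predicted column in the left image
    v = fx * Y / Z + cy       # predicted row (square pixels assumed)
    d = fx * baseline / Z     # predicted disparity in pixels
    return (u, v), d


def quickened_window(pred_uv, pred_d, margin=5.0):
    """Restrict the stereo correspondence search to a window around the
    prediction, sized by the positional uncertainty, instead of scanning
    the full epipolar line."""
    u, v = pred_uv
    return ((u - margin, u + margin),     # column range to search
            (v - margin, v + margin),     # row range to search
            (pred_d - margin, pred_d + margin))  # disparity range
```

For example, a point at 4 m depth seen while the vehicle drives 0.5 m forward would be predicted at a larger disparity in the next frame, and only a small band of candidate matches around that prediction need be examined:

```python
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # no rotation between frames
uv, d = predict_feature([1.0, 0.5, 4.0], I, [0.0, 0.0, -0.5],
                        fx=500.0, cx=320.0, cy=240.0, baseline=0.12)
window = quickened_window(uv, d)
```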
