Abstract

We investigated four models for estimating time-to-contact (TTC) from retinal flow. Lee's model can deal with sparse flow but fails if the flow contains a rotational component. Koenderink's model, based on div, can deal with rotation but fails if the flow is sparse or if the world does not vary coherently in depth. Two new models were developed by representing retinal flow as the sum of an expansion and a rotation component. The first operates on pairs of points and can deal with sparse flow but fails if the world does not vary coherently in depth. Uniquely, this model provides TTC estimates without prior knowledge of either the focus of expansion (FOE) or the focus of rotation (FOR). The second model estimates both the FOE and the FOR and then operates on a point-by-point basis. This model can deal with incoherent depth variations. We tested these distinguishing model properties against human performance by requiring subjects to estimate the FOE and TTC from random-dot kinematograms. We used kinematograms depicting smooth planes and random 3-D clouds of points, and systematically varied the density of the flow. Performance was not substantially reduced by sparse flow or by incoherent depth, which argues against Koenderink's model and the first of our own models. Performance remained good when rotation was added to the flow, which argues against Lee's model. Overall, the data favour a model that first decomposes flow into expansion and rotation components and then estimates TTC on a point-by-point basis.
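The contrast between Lee's and Koenderink's models can be made concrete with a small numerical sketch. The following Python example (my own illustration, not code from the paper; the flow-field forms, point positions, and rotation rate are assumptions) builds a synthetic retinal flow for a fronto-parallel plane approaching with a true TTC of 2 s, then estimates TTC two ways: Lee's tau (retinal distance to the FOE divided by radial speed) and a Koenderink-style divergence estimate (TTC = 2/div for a fronto-parallel surface). Adding a rotational component about a focus of rotation that differs from the FOE biases the tau estimate but leaves the divergence estimate intact, since rotational flow is divergence-free.

```python
import numpy as np

T = 2.0                          # true time-to-contact (s); assumed for illustration
foe = np.array([0.0, 0.0])       # focus of expansion (FOE)

def expansion_flow(p):
    """Pure expansion about the FOE: an approaching fronto-parallel
    plane gives flow (p - FOE) / TTC at retinal position p."""
    return (p - foe) / T

def rotation_flow(p, c, omega):
    """Rotational flow about a focus of rotation c with angular speed omega."""
    x, y = p - c
    return omega * np.array([-y, x])

def ttc_lee(p, v):
    """Lee's tau: retinal distance to the FOE over radial (outward) speed."""
    d = p - foe
    r = np.linalg.norm(d)
    r_dot = np.dot(d, v) / r     # radial component of the flow at p
    return r / r_dot

def ttc_div(flow, p, eps=1e-4):
    """Koenderink-style estimate: for a fronto-parallel surface div = 2/TTC;
    the divergence is estimated here by central differences."""
    ex, ey = np.array([eps, 0.0]), np.array([0.0, eps])
    du_dx = (flow(p + ex)[0] - flow(p - ex)[0]) / (2 * eps)
    dv_dy = (flow(p + ey)[1] - flow(p - ey)[1]) / (2 * eps)
    return 2.0 / (du_dx + dv_dy)

p = np.array([0.3, 0.4])         # a sample dot position (arbitrary)
c = np.array([0.2, -0.1])        # focus of rotation (FOR), deliberately != FOE

pure = expansion_flow
mixed = lambda q: expansion_flow(q) + rotation_flow(q, c, omega=0.5)

print(ttc_lee(p, pure(p)))       # ~2.0: tau is exact for pure expansion
print(ttc_lee(p, mixed(p)))      # biased: rotation corrupts the radial speed
print(ttc_div(mixed, p))         # ~2.0: rotational flow is divergence-free
```

This mirrors the abstract's pattern of results: tau-style estimates tolerate sparse flow (each dot is processed independently) but break under rotation, while the divergence-based estimate survives rotation but needs a locally dense, coherently varying flow field to measure div at all.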
