Abstract

We developed a model of the input circuitry of the FD1 cell, an identified motion-sensitive interneuron in the blowfly's visual system. The model circuit successfully reproduces the FD1 cell's most conspicuous property: its larger responses to objects than to spatially extended patterns. The model circuit also mimics the time-dependent responses of FD1 to dynamically complex naturalistic stimuli, which are shaped by the blowfly's saccadic flight and gaze strategy: the FD1 responses are enhanced when, as a consequence of self-motion, a nearby object crosses the receptive field during intersaccadic intervals. Moreover, on virtual test flights through a three-dimensional environment with systematically modified environmental patterns, the model predicts that these object-induced responses are overlaid by pronounced pattern-dependent fluctuations. Hence, the FD1 cell is predicted not to detect unambiguously objects defined by the spatial layout of the environment, but to be sensitive also to objects distinguished by textural features. This ambiguity suggests that information about objects, irrespective of the features by which they are defined, is encoded by a population of cells, in which the FD1 cell presumably plays a prominent role.
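The authors' circuit is not reproduced here, but its headline property, stronger responses to a small moving object than to an extended moving pattern, can be illustrated with a minimal sketch. The sketch assumes a correlation-type (Reichardt) elementary motion detector front end, the standard model of fly motion vision, combined with a saturating excitatory input and subtractive inhibition from a hypothetical wide-field pool cell; the function names and all parameter values (e_max, k, w_inh) are illustrative assumptions, not the published model.

    import numpy as np

    def lowpass(signal, dt=0.002, tau=0.02):
        # first-order low-pass filter along the time axis (axis 0)
        out = np.zeros_like(signal)
        a = dt / (tau + dt)
        for t in range(1, len(signal)):
            out[t] = out[t - 1] + a * (signal[t] - out[t - 1])
        return out

    def emd_array(lum, dt=0.002, tau=0.02):
        # correlation-type (Reichardt) detectors on a 1-D photoreceptor row:
        # each half-detector multiplies one receptor signal with the delayed
        # (low-pass filtered) signal of its neighbour; the mirror-symmetric
        # halves are subtracted to give a direction-selective output
        lp = lowpass(lum, dt, tau)
        return lp[:, :-1] * lum[:, 1:] - lum[:, :-1] * lp[:, 1:]

    def fd_like_cell(emd, e_max=10.0, k=2.0, w_inh=0.5):
        # hypothetical small-field-selective unit: spatially pooled EMD
        # excitation saturates with stimulus size, while inhibition from a
        # wide-field "pool" cell keeps growing, so extended patterns recruit
        # proportionally more inhibition than extra excitation
        exc_raw = emd.clip(min=0).sum(axis=1)
        exc = e_max * exc_raw / (exc_raw + k)   # saturating excitatory input
        inh = w_inh * exc_raw                   # linear wide-field inhibition
        return (exc - inh).clip(min=0)

    # drifting sinusoidal grating, either full field or restricted to a patch
    T, N, dt = 500, 60, 0.002
    t = np.arange(T)[:, None] * dt
    x = np.arange(N)[None, :]
    moving = np.sin(2 * np.pi * (0.1 * x - 8.0 * t))
    static = np.repeat(np.sin(2 * np.pi * 0.1 * x), T, axis=0)

    obj = static.copy()
    obj[:, 25:33] = moving[:, 25:33]            # only the "object" moves

    for name, stim in [("object", obj), ("full field", moving)]:
        print(name, fd_like_cell(emd_array(stim)).mean())

With these arbitrary parameters, the printed mean response to the small moving patch is several times larger than to full-field motion of the same grating, mirroring the small-field selectivity the abstract attributes to FD1.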

Introduction

Retinal image displacements are elicited when a moving object crosses the visual field ("object motion"). Several insect species, ranging from flies to bees and hawkmoths, have been shown to use relative motion very efficiently to detect objects, to infer their distance, and to respond to them adequately in contexts ranging from landing to spatial navigation. In doing so, they rely mainly on relative-motion information at the edges of objects (Lehrer et al., 1988; Srinivasan et al., 1989; Lehrer and Srinivasan, 1993; Kimmerle et al., 1996; Kern et al., 1997; Kimmerle and Egelhaaf, 2000a; Dittmar et al., 2010).
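The relative-motion cue invoked here can be made concrete. During pure translational self-motion, a surface at distance d and azimuth theta slides across the retina at angular velocity omega = v * sin(theta) / d, so a nearby object moves faster than the distant background and the flow field is discontinuous at its edges. The following sketch, with freely chosen distances and speed, shows how simply differentiating such a flow field localizes the object edges:

    import numpy as np

    def translational_flow(distances, v=1.0, theta=np.pi / 2):
        # retinal angular velocity during pure translation:
        # omega = v * sin(theta) / d, so nearer surfaces move faster
        return v * np.sin(theta) / distances

    # toy depth profile: background 4 m away, object at 1 m in one patch
    d = np.full(100, 4.0)
    d[40:55] = 1.0

    flow = translational_flow(d)                # 0.25 rad/s vs. 1.0 rad/s
    edges = np.abs(np.diff(flow))               # relative motion peaks here
    print("object edges at:", np.nonzero(edges > 0.5)[0])   # -> [39 54]

Real circuits do not, of course, read edges off a noiseless flow field; the point is only that the velocity discontinuity at object boundaries is the cue the behavioural studies cited above show insects exploiting.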

