Abstract

Flying insects, such as flies and bees, rely on consistent information about the depth structure of their surroundings when maneuvering through cluttered natural environments, for example when avoiding collisions, approaching targets, or navigating through space. Insects are thought to obtain depth information visually, from the retinal image displacements (“optic flow”) generated during translational ego-motion. Optic flow in the insect visual system is processed by a mechanism that can be modeled by correlation-type elementary motion detectors (EMDs). However, it is still an open question how spatial information can be extracted reliably from the highly contrast- and pattern-dependent EMD responses, especially given the vast range of light intensities encountered in natural environments. We address this question by systematically modeling the peripheral visual system of flies, including its various adaptive mechanisms. Different model variants of the peripheral visual system were stimulated with image sequences that mimic the panoramic visual input during translational ego-motion in various natural environments, and the resulting peripheral signals were fed into an array of EMDs. We characterized the influence of each peripheral computational unit on the representation of spatial information in the EMD responses. Our model simulations reveal that information about the overall light level needs to be eliminated from the EMD input, as is accomplished under light-adapted conditions in the insect peripheral visual system. The response characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which strongly reduces the contrast dependence of the EMDs and effectively enhances the representation of the nearness of objects and, especially, of their contours. We furthermore show that local brightness adaptation of photoreceptors allows for spatial vision under a wide range of dynamic light conditions.
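To make the processing chain described above concrete, the following minimal Python sketch strings together the three stages the abstract names: local brightness adaptation of photoreceptors, LMC-like band-pass filtering, and a one-dimensional array of correlation-type EMDs. It is an illustrative toy, not the paper's calibrated model; all function names, filter structures, and time constants are assumptions.

```python
import numpy as np

def adapt_photoreceptors(frames, tau=20.0, eps=1e-6):
    """Local brightness adaptation (assumed mechanism): each receptor divides
    its input by a running low-pass estimate of its own luminance, discarding
    the overall light level while preserving local contrast."""
    adapted = np.empty(frames.shape, dtype=float)
    mean = frames[0].astype(float)
    for t in range(frames.shape[0]):
        mean += (frames[t] - mean) / tau       # slow estimate of local luminance
        adapted[t] = frames[t] / (mean + eps)  # intensity relative to that estimate
    return adapted

def lmc_bandpass(signal, tau_fast=2.0, tau_slow=20.0):
    """LMC-like temporal band-pass: difference of a fast and a slow
    first-order low-pass filter, so the DC level cancels and transients pass."""
    fast = np.zeros(signal.shape[1])
    slow = np.zeros(signal.shape[1])
    out = np.empty_like(signal, dtype=float)
    for t in range(signal.shape[0]):
        fast += (signal[t] - fast) / tau_fast
        slow += (signal[t] - slow) / tau_slow
        out[t] = fast - slow
    return out

def emd_array(signal, tau=5.0):
    """Correlation-type (Hassenstein-Reichardt) EMDs: the low-pass-filtered
    ("delayed") signal of each receptor is multiplied with the undelayed
    signal of its neighbor, and the mirror-symmetric product is subtracted,
    giving a direction-selective response for each receptor pair."""
    delayed = np.zeros(signal.shape[1])
    out = np.empty((signal.shape[0], signal.shape[1] - 1))
    for t in range(signal.shape[0]):
        delayed += (signal[t] - delayed) / tau
        out[t] = delayed[:-1] * signal[t, 1:] - signal[t, :-1] * delayed[1:]
    return out

# Toy stimulus: a bright edge drifting rightward across a 1-D retina, with a
# tenfold jump in overall light level halfway through the sequence.
T, N = 300, 64
frames = np.ones((T, N))
for t in range(T):
    frames[t, : (t // 4) % N] = 2.0
frames[T // 2:] *= 10.0                        # sudden global brightness step
responses = emd_array(lmc_bandpass(adapt_photoreceptors(frames)))
```

Comparing `responses` computed with and without `adapt_photoreceptors` illustrates the abstract's point: without the adaptation stage, the global brightness step rescales the EMD output even though the scene geometry is unchanged; with it, the responses before and after the step remain comparable.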

Highlights

  • Animals have to acquire and process sensory information about the spatial layout of their surroundings to navigate successfully in cluttered environments

  • One challenge for animal visual systems is to extract behaviorally relevant information consistently in natural environments, irrespective of strong variations in light level. It is not yet understood how this is accomplished in the spatial vision of aerial insects, which is assumed to be based on optic flow information obtained during translational flight segments (Egelhaaf et al., 2012, 2014)

  • Our aim was to understand how information regarding the spatial layout of natural environments can be consistently extracted under a wide range of dynamic brightness conditions

Introduction

Animals have to acquire and process sensory information about the spatial layout of the environment to be able to navigate successfully in cluttered environments. To obtain this information, they can use the visual image displacements on the retina (“optic flow”) induced during ego-motion (Egelhaaf et al., 2012). Flying insects, as well as birds, employ saccadic flight and gaze strategies that largely separate translational from rotational ego-motion; these strategies facilitate spatial vision during the intersaccadic intervals between the brief and rapid saccadic turns (Schilstra and van Hateren, 1999; van Hateren and Schilstra, 1999; Eckmeier et al., 2008; Mronz and Lehmann, 2008; Boeddeker et al., 2010; Braun et al., 2010, 2012; Egelhaaf et al., 2012, 2014; Kress et al., 2015; Muijres et al., 2015).
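Because purely rotational optic flow is independent of object distance, whereas translational flow scales with nearness (the inverse of distance), this gaze strategy makes the intersaccadic flow field directly interpretable in spatial terms. The following toy sketch illustrates that relationship for ideal, noise-free flow under pure horizontal translation; the function name, parameters, and values are hypothetical and only serve to make the geometry concrete.

```python
import numpy as np

def nearness_from_flow(flow, speed, azimuths, min_sin=0.1):
    """For pure translation at `speed`, the local flow magnitude equals
    (speed / distance) * sin(azimuth relative to the flight direction), so
    nearness (1/distance) follows by inversion. Directions close to the
    flight axis are masked, since translational flow vanishes there."""
    s = np.abs(np.sin(azimuths))
    nearness = np.full(flow.shape, np.nan)
    valid = s > min_sin
    nearness[valid] = flow[valid] / (speed * s[valid])
    return nearness

# Example: objects at 1 m and 4 m, seen at 90 and 30 degrees azimuth,
# during translation at 0.5 m/s.
azimuths = np.array([np.pi / 2, np.pi / 6])
distances = np.array([1.0, 4.0])
flow = (0.5 / distances) * np.sin(azimuths)     # ideal translational flow
print(nearness_from_flow(flow, 0.5, azimuths))  # -> [1.0, 0.25] = 1/distance
```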
