Abstract

A moving visual figure may contain first-order signals defined by variation in mean luminance, second-order signals defined by constant mean luminance and variation in the luminance envelope, or higher-order signals that cannot be estimated by taking higher moments of the luminance distribution. Separating these properties of a moving figure to experimentally probe the visual subsystems that encode them is technically challenging and has resulted in debated mechanisms of visual object detection by flies. Our prior work took a white noise systems identification approach, using a commercially available electronic display system, to characterize the spatial variation in the temporal dynamics of two distinct subsystems for first- and higher-order components of visual figure tracking. The method relied on single-pixel displacements of two visual stimuli according to two binary maximum length shift register sequences (m-sequences) and cross-correlation of each m-sequence with time-varying flight steering measurements. The resultant spatio-temporal action fields (STAFs) represent temporal impulse responses parameterized by the azimuthal location of the visual figure: one STAF for first-order and another for higher-order components of compound stimuli. Here we review m-sequence and reverse correlation procedures, then describe our application in detail, provide MATLAB code, validate the STAFs, and demonstrate their utility and robustness by predicting the results of other published experimental procedures. This method has demonstrated how two relatively modest innovations on classical white noise analysis (the inclusion of space as a way to organize response kernels, and the use of linear decoupling to measure the responses to two channels of visual information simultaneously) could substantially improve our basic understanding of visual processing in the fly.
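The abstract refers to m-sequences and reverse correlation; the paper itself supplies MATLAB code, so the following is an independent Python sketch with invented parameters, not the authors' implementation. It generates an m-sequence with a linear feedback shift register (here from the primitive polynomial x^7 + x + 1; the function name, tap choice, and toy kernel are all illustrative) and shows how the sequence's nearly delta-like circular autocorrelation lets cross-correlation of stimulus with response recover a temporal impulse response:

```python
import numpy as np

def m_sequence(taps, nbits):
    """Binary m-sequence from a Fibonacci linear feedback shift register.
    `taps` are 1-indexed feedback positions; returns values in {-1, +1}."""
    reg = np.ones(nbits, dtype=int)      # any nonzero seed works
    out = np.empty(2**nbits - 1, dtype=int)
    for i in range(out.size):
        out[i] = reg[-1]
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = np.roll(reg, 1)
        reg[0] = fb
    return 2 * out - 1                   # map {0, 1} -> {-1, +1}

# 127-step sequence (period 2^7 - 1) from x^7 + x + 1
m = m_sequence(taps=(7, 6), nbits=7)
N = m.size

# m-sequences have a two-valued circular autocorrelation: N at zero lag,
# -1 at every other lag -- the property that reverse correlation exploits.
autocorr = np.array([np.dot(m, np.roll(m, k)) for k in range(N)])

# Toy "steering response": the stimulus filtered by an unknown kernel
# (circular convolution via the FFT keeps the algebra exact).
kernel = np.exp(-np.arange(20) / 5.0) * np.sin(np.arange(20) / 3.0)
resp = np.real(np.fft.ifft(np.fft.fft(m) * np.fft.fft(kernel, N)))

# Reverse correlation: cross-correlating stimulus with response recovers
# the impulse response, up to a small DC bias of order 1/N.
est = np.array([np.dot(np.roll(m, k), resp) for k in range(N)]) / N
```

Because the autocorrelation is N at zero lag and only -1 elsewhere, the cross-correlation estimate `est` matches the true kernel to within a constant offset of order 1/N.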

Highlights

  • Visual figure detection is a central capability demonstrated by sophisticated visual systems, including those of flies (Reichardt and Wenking, 1969; Reichardt and Poggio, 1976)

  • The figure-centering fixation response in Drosophila is clearly seen for figures corresponding to actual physical objects, i.e., those in which the motion of any internal luminance pattern corresponds to the motion of the mean luminance distribution defining the object itself. Beyond such first-order or Fourier motion, flies also track figures defined by the envelope of mean luminance and figures defined by higher-order properties that do not correspond to, or do not contain, first-order signals (Theobald et al., 2008; Aptekar et al., 2012; Zhang et al., 2013)

  • When the motion of such a figure is driven by a single white noise sequence, the spatial and temporal characteristics of the turning reactions and the derived spatiotemporal action field (STAF) are nearly identical to those of the figure motion (FM) STAFs obtained from the original figures containing uncorrelated elementary motion (EM)
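The abstract's "linear decoupling" refers to driving two stimulus channels with statistically independent sequences so that a single steering trace can be cross-correlated against each input separately, yielding one kernel per channel. Below is a minimal Python sketch of that idea, assuming independent binary inputs and invented kernels; none of the names or parameters come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4096

# Two statistically independent binary (+/-1) stimulus sequences, one per
# channel (standing in for, e.g., figure-motion and elementary-motion inputs).
s1 = rng.choice([-1, 1], size=N)
s2 = rng.choice([-1, 1], size=N)

# Hypothetical linear kernels for the two channels
t = np.arange(30)
k1 = np.exp(-t / 6.0) * np.sin(t / 4.0)
k2 = np.exp(-t / 10.0)

# A single measured trace: the superposition of both channel responses
# plus measurement noise.
resp = (np.convolve(s1, k1)[:N] + np.convolve(s2, k2)[:N]
        + 0.1 * rng.standard_normal(N))

# Because the two inputs are (nearly) uncorrelated with each other,
# cross-correlating the one trace with each input isolates that
# channel's kernel -- the decoupling step.
est1 = np.correlate(resp, s1, mode='full')[N - 1:N - 1 + 30] / N
est2 = np.correlate(resp, s2, mode='full')[N - 1:N - 1 + 30] / N
```

The cross terms between channels average toward zero at a rate of roughly 1/sqrt(N), so longer sequences give cleaner kernel estimates.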


Introduction

Visual figure detection is a central capability demonstrated by sophisticated visual systems, including those of flies (Reichardt and Wenking, 1969; Reichardt and Poggio, 1976). In some species, this capability extends even to tracking targets that subtend less than one ommatidial facet and fall below classical detection limits (O’Carroll and Wiederman, 2014). Such sensitivity implies that figure tracking capitalizes on highly specialized neural mechanisms. Although progress has been made in understanding phenomenological aspects of figure detection in flies, its computational basis is still largely unexplained, as are the ways in which it relates to the various other perceptual modes of vision, and how these modes are transformed, recombined, or selected to produce calibrated motor commands for the control of visual orientation.

