Abstract

Light intensities (photons s⁻¹ μm⁻²) in a natural scene vary over several orders of magnitude, from shady woods to direct sunlight. A major challenge facing the visual system is how to map such a large dynamic input range into its limited output range, so that a signal is neither buried in noise in darkness nor saturated in brightness. A fly photoreceptor achieves such a large dynamic range; it can encode intensity changes from single photons to billions of photons, outperforming man‐made light sensors. This performance requires powerful light adaptation, the neural implementation of which has only become clear recently. A computational fly photoreceptor model, which mimics the real phototransduction processes, has elucidated how light adaptation happens dynamically through stochastic adaptive quantal information sampling. A Drosophila R1–R6 photoreceptor's light sensor, the rhabdomere, has 30,000 microvilli, each of which stochastically samples incoming photons. Each microvillus employs a full G‐protein‐coupled receptor signalling pathway to adaptively transduce photons into quantum bumps (QBs, or samples). QBs then sum into the macroscopic photoreceptor response, governed by four quantal sampling factors (limitations): (i) the number of photon sampling units in the cell structure (microvilli), (ii) sample size (QB waveform), (iii) latency distribution (time delay between photon arrival and emergence of a QB), and (iv) refractory period distribution (time for a microvillus to recover after a QB). Here, we review how these factors jointly orchestrate light adaptation over a large dynamic range.
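To make the sampling factors concrete, the following toy simulation (a minimal sketch, not the published model) shows how factor (i), a finite pool of 30,000 microvilli, and factor (iv), stochastic refractory periods, compress a high photon input rate into a lower QB rate. The photon rate, time step, and 50–300 ms refractory range are illustrative assumptions; only the microvillus count comes from the text above.

```python
import numpy as np

rng = np.random.default_rng(0)

N_MICROVILLI = 30_000     # photon sampling units, as in an R1-R6 rhabdomere
DT = 1e-3                 # time step, s (assumed)
T = 1.0                   # simulated duration, s (assumed)
PHOTON_RATE = 3e5         # mean photon arrivals per second (illustrative)

steps = int(T / DT)
refractory = np.zeros(N_MICROVILLI, dtype=int)   # steps left until reusable
qbs_per_step = np.zeros(steps, dtype=int)

for t in range(steps):
    # photon arrivals are Poisson-distributed and land on random microvilli
    n_photons = rng.poisson(PHOTON_RATE * DT)
    hits = rng.integers(0, N_MICROVILLI, size=n_photons)
    # only non-refractory microvilli can transduce a photon into a QB
    ready = np.unique(hits[refractory[hits] == 0])
    qbs_per_step[t] = ready.size
    # each QB is followed by a stochastic refractory period (50-300 ms here)
    refractory[ready] = rng.integers(50, 300, size=ready.size)
    refractory -= (refractory > 0)

print(f"photons/s in: {PHOTON_RATE:.0f}, QBs/s out: {qbs_per_step.sum() / T:.0f}")
```

Raising PHOTON_RATE in this sketch shows the compressive, adaptive trend: the output QB rate saturates as an ever larger fraction of microvilli is caught refractory.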

Highlights

  • Vision starts from phototransduction; a photoreceptor absorbs photons from the environment and transduces them into electrical signals

  • (iii) Summation Model: quantum bumps (QBs) from 30,000 microvilli integrate into the macroscopic light-induced current (LIC) response

  • Similar to what is seen in real photoreceptor outputs (Faivre & Juusola, 2008; Zheng et al., 2009), simulations have shown that variable QBs from a large microvillus population sum into largely invariant response waveforms to naturalistic stimuli under different illumination conditions (Song et al., 2012); a minimal summation sketch follows these highlights
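The summation step highlighted above can be sketched in a few lines, assuming a gamma-shaped QB waveform and log-normally variable bump amplitudes (both illustrative choices, not parameters from the cited studies): many variable QBs, once summed, produce a smooth macroscopic LIC.

```python
import numpy as np

rng = np.random.default_rng(1)
DT = 1e-4                                    # s; time resolution (assumed)

def bump(t, amp, tau=0.004, n=3):
    """Illustrative gamma-shaped QB waveform; all parameters assumed."""
    x = t / tau
    return amp * x**n * np.exp(-x)

kernel_t = np.arange(0, 0.05, DT)            # 50 ms support per bump

# QB onset times from the sampling stage; here drawn uniformly at random
onsets = rng.uniform(0.0, 0.5, size=2_000)
lic = np.zeros(int(round(0.6 / DT)))         # 0.6 s output trace
for t0 in onsets:
    amp = rng.lognormal(mean=0.0, sigma=0.3)     # variable bump sizes
    i = int(t0 / DT)
    lic[i:i + kernel_t.size] += bump(kernel_t, amp)

print(f"peak summed LIC (arbitrary units): {lic.max():.1f}")
```

Because thousands of independently jittered, variable bumps are averaged, repeated runs give closely similar LIC traces, echoing the largely invariant response waveforms reported above.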

Introduction

Vision starts from phototransduction; a photoreceptor absorbs photons (input) from the environment and transduces them into electrical signals (output). A mechanistic understanding of how adaptation dynamics arise at the photoreceptor level would be important for building the next generation of biomimetic light sensors, and a computational modelling approach can help in this task. To maximize sensory information transfer, an optimal filter should change from a low-pass integrator to a band-pass differentiator with increasing stimulus signal-to-noise ratio (van Hateren, 1997). Whilst such filtering corresponds well with the adaptive trends in sensory-neural signalling, real neural outputs are more sophisticated, as they adapt continuously and near-instantaneously to the temporal structure of stimuli. To study the emergent properties of complex adaptive systems, such as living cells, it seems better to use bottom-up biomimetic approaches, whereupon a computational virtual cell model is constructed to replicate its real counterpart's ultrastructure and signalling. We recently constructed such a virtual Drosophila R1–R6 photoreceptor cell. One of its modules, the Stochastic Bump Model (ii; Fig. 2B), uses stochastic biochemical reactions inside a microvillus to transduce absorbed photon sequences into QB sequences (Pumir et al., 2008; Song et al., 2012).
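The filtering argument can be illustrated numerically. Below is a minimal sketch assuming a 1/f² signal power spectrum and a flat noise floor (both assumptions made for illustration, not taken from van Hateren, 1997): a Wiener-style reconstruction filter with passband whitening, H(f) = √S(f)/(S(f)+N), peaks at f = 1/√N, so a rising signal-to-noise ratio shifts the gain peak from low frequencies (integrator-like) to high frequencies (differentiator-like).

```python
import numpy as np

f = np.arange(1.0, 501.0)            # Hz
S = 1.0 / f**2                       # naturalistic (1/f^2) signal power (assumed)
for snr in (0.1, 10.0, 100.0):
    N = S.sum() / (snr * f.size)     # flat noise floor set by the SNR
    H = np.sqrt(S) / (S + N)         # Wiener filter with passband whitening
    # H reduces to f / (1 + N f^2), which peaks at f = 1/sqrt(N)
    print(f"SNR {snr:>5}: gain peaks near {f[np.argmax(H)]:.0f} Hz")
```

Running this prints a gain peak that climbs from a few hertz at low SNR to well over a hundred hertz at high SNR, matching the integrator-to-differentiator trend described above.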

[Figure residue: panels showing bright light input modelled as a Poisson photon series]