Light intensities (photons s⁻¹ μm⁻²) in a natural scene vary over several orders of magnitude, from shady woods to direct sunlight. A major challenge facing the visual system is how to map such a large dynamic input range onto its limited output range, so that a signal is neither buried in noise in darkness nor saturated in brightness. A fly photoreceptor achieves such a large dynamic range: it can encode intensity changes from single photons to billions of photons, outperforming man‐made light sensors. This performance requires powerful light adaptation, the neural implementation of which has only recently become clear. A computational fly photoreceptor model, which mimics the real phototransduction processes, has elucidated how light adaptation arises dynamically through stochastic adaptive quantal information sampling. A Drosophila R1–R6 photoreceptor's light sensor, the rhabdomere, comprises 30,000 microvilli, each of which stochastically samples incoming photons. Each microvillus employs a full G‐protein‐coupled receptor signalling pathway to adaptively transduce photons into quantum bumps (QBs, or samples). QBs then sum into the macroscopic photoreceptor response, which is governed by four quantal sampling factors (limitations): (i) the number of photon sampling units in the cell structure (microvilli), (ii) sample size (QB waveform), (iii) latency distribution (the time delay between photon arrival and the emergence of a QB), and (iv) refractory period distribution (the time a microvillus needs to recover after producing a QB). Here, we review how these factors jointly orchestrate light adaptation over a large dynamic range.
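To make the four sampling factors concrete, the following is a minimal sketch of such stochastic quantal sampling. It is not the published photoreceptor model: the microvillus count (30,000) comes from the text above, but the time step, QB waveform shape, log-normal latency distribution, gamma-distributed refractory period, and all numeric constants are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch (assumed parameters, not the published model) of stochastic
# adaptive quantal sampling by a pool of microvilli.
import numpy as np

rng = np.random.default_rng(0)

N_MICROVILLI = 30_000          # (i) number of photon sampling units
DT = 1e-3                      # simulation time step, s (assumed)
T_TOTAL = 1.0                  # simulated duration, s (assumed)

def qb_waveform(dt=DT, tau=0.01, n=3):
    """(ii) Sample size: a fixed gamma-shaped quantum-bump waveform (a.u., assumed)."""
    t = np.arange(0, 10 * tau * n, dt)
    w = (t / tau) ** n * np.exp(-t / tau)
    return w / w.max()

def simulate(photon_rate_per_cell):
    """Sum QBs from stochastically sampling microvilli into a macroscopic response."""
    n_steps = int(T_TOTAL / DT)
    waveform = qb_waveform()
    response = np.zeros(n_steps + len(waveform))
    refractory_until = np.zeros(N_MICROVILLI)   # (iv) per-microvillus dead time

    for step in range(n_steps):
        t = step * DT
        # Photons arrive as a Poisson process; each hits a random microvillus.
        n_photons = rng.poisson(photon_rate_per_cell * DT)
        hits = rng.integers(0, N_MICROVILLI, size=n_photons)
        for m in hits:
            if t < refractory_until[m]:
                continue                         # photon lost: microvillus refractory
            # (iii) latency: log-normal delay between photon arrival and QB onset (assumed shape)
            latency = rng.lognormal(mean=np.log(0.02), sigma=0.3)
            onset = step + int(latency / DT)
            if onset < n_steps:
                response[onset:onset + len(waveform)] += waveform
            # (iv) refractory period: gamma-distributed recovery time (assumed shape)
            refractory_until[m] = t + latency + rng.gamma(shape=4.0, scale=0.025)

    return response[:n_steps]

# With a finite microvillus pool and refractoriness, the summed response
# grows sublinearly at high intensities, i.e. it light-adapts.
for rate in (1e2, 1e4, 1e6):
    r = simulate(rate)
    print(f"photon rate {rate:>9.0e} /s -> peak response {r.max():8.1f} (a.u.)")
```

Even this toy version shows the compressive logic described above: at low rates nearly every photon yields a QB, while at high rates an increasing fraction of photons lands on refractory microvilli and is discarded, so the summed output saturates gracefully rather than abruptly.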