ABSTRACT
This paper introduces an image formation technique that realizes automatic adaptation to the illumination conditions during image capture. This adaptation takes place asynchronously in a single shot, requiring neither external control nor iterations on the exposure setting. These characteristics are especially suitable for applications such as visual robot navigation, where illumination may change abruptly and immediate decisions must be made. The equations that model the proposed technique show exposure-time independence. Moreover, each pixel adapts itself according to the ratio between local and global illumination. The dependence on this ratio, a relative rather than an absolute magnitude, together with the aforementioned independence of exposure time means that, theoretically, any illumination scenario can be represented within an arbitrary signal range. We also address the CMOS implementation of this technique. An extensive set of electrical simulations confirms its auto-exposure and high-dynamic-range encoding capabilities. In particular, we simulated the capture of a 118-dB scene on a -px array over five orders of magnitude of ambient illumination. Corner and mismatch simulations reveal strong robustness of the proposed circuitry against fabrication non-idealities. Finally, although we target applications in which large image resolution is not a critical specification, we examine strategies to minimize the impact of the required focal-plane circuit elements on the pixel pitch.
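To make the ratio-based adaptation concrete, the following numerical sketch illustrates the core property stated above: an output that depends only on the ratio of local to global illumination is, by construction, invariant to the exposure time and to the absolute illumination level. This is our own illustrative model, not the paper's circuit equations; the logarithmic compression curve, the ratio_range parameter, and the use of the scene mean as the global illumination measure are all assumptions made for the example.

    import numpy as np

    def adapted_pixel_response(E_local, E_global, ratio_range=(1e-3, 1e3)):
        # Output depends only on the local-to-global illumination ratio,
        # so it is independent of exposure time and of the absolute
        # illumination level. The logarithmic compression is a placeholder
        # for the actual pixel transfer function; ratios outside
        # ratio_range saturate at the ends of the signal range.
        ratio = np.asarray(E_local, dtype=float) / E_global
        lo, hi = np.log(ratio_range[0]), np.log(ratio_range[1])
        x = np.clip(np.log(ratio), lo, hi)   # compress the HDR ratio
        return (x - lo) / (hi - lo)          # map into a fixed [0, 1] range

    scene = np.array([1e-2, 1e1, 1e4])       # 120-dB intra-scene range
    for k in (1.0, 1e3, 1e5):                # ambient level spans 5 decades
        out = adapted_pixel_response(k * scene, (k * scene).mean())
        print(out)                           # identical output for every k

Scaling every input by a common factor k, as a change of ambient illumination or of exposure time would, cancels in the ratio, so the encoded values are unchanged across five orders of magnitude of illumination, mirroring the invariance the abstract claims.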