Abstract

Many kinds of vision systems are available on today's market, serving a wide variety of applications. Despite this variety, all digital cameras share the same basic functional components: photon collection, wavelength discrimination (filters), timing, control and drive electronics for the sensing elements, sample-and-hold operators, colour-processing circuits, analogue-to-digital conversion and electronic interfaces (Fossum, 1997). Today, robotics and intelligent vehicles need sensors with fast response times and low energy consumption that are able to extract high-level information from the environment (Muramatsu et al., 2002). Adding hardware computation operators close to the sensor increases the computational capability and reduces input/output traffic towards the central processing unit. CCD technology has been the dominant technology for electronic image sensors for several decades, owing to its high photosensitivity, low fixed-pattern noise, small pixels and large array sizes. However, in the last decade, CMOS image sensors have gained attention from many researchers and industries due to their low energy dissipation, low cost, on-chip processing capabilities and their integration in standard or quasi-standard VLSI processes. Still, raw images acquired by CMOS sensors are of poor quality for display and need further processing, mainly because of noise, blurriness and poor contrast. To tackle these problems, image-processing circuits are typically associated with image sensors as part of the whole vision system. Usually, sensing and preprocessing coexist as two separate areas implemented on the same integrated circuit. To cope with the high data flow induced by computer vision algorithms, an alternative approach consists in performing some image processing directly on the sensor focal plane. Integrating the pixel array and image-processing circuits on a single monolithic chip makes the system more compact and enhances the behaviour and response of the sensor. Hence, to achieve simple low-level image-processing tasks (early vision), a smart sensor integrates analogue and/or digital processing circuits within the pixel (Burns et al., 2003; El Gamal et al., 1999; Dudek & Hicks, 2000) or at the edge of the pixel array (Ni & Guan, 2000). Most often, such circuits are dedicated to specific applications. Their energy dissipation is low compared with that of traditional multi-chip approaches (microprocessor,
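
To make the notion of an early-vision (low-level) operation concrete, the sketch below is a minimal software emulation, not taken from the paper, of the kind of local, per-pixel computation that focal-plane circuits typically implement: a 3x3 convolution used for smoothing (noise reduction) or edge extraction. The function name `convolve3x3` and the choice of kernels are illustrative assumptions.

```python
# Illustrative sketch only: software emulation of a local 3x3 convolution,
# the sort of early-vision operation smart sensors implement with
# analogue/digital circuits in or near the pixel array.
# Names and kernels are hypothetical, not from the paper.

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D image given as a list of lists.

    Border pixels are left unchanged, mimicking edge pixels of the
    array that lack a full neighbourhood.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += kernel[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = acc
    return out

# Two typical early-vision kernels: smoothing and horizontal edge extraction.
SMOOTH = [[1 / 9] * 3 for _ in range(3)]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

if __name__ == "__main__":
    img = [[10, 10, 10, 80, 80],
           [10, 10, 10, 80, 80],
           [10, 10, 10, 80, 80],
           [10, 10, 10, 80, 80]]
    print(convolve3x3(img, SOBEL_X))
```

In a focal-plane implementation, each interior pixel would compute such a weighted sum of its neighbours in parallel, so only the reduced, preprocessed data need to be transferred off the array.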
