Abstract

This paper presents a novel vision chip architecture based on pixel-neighborhood level parallel processing. The architecture consists of neighborhoods of 8×8 digital pixel sensors, where each group of 8×8 sensors is physically embedded within its own neighborhood processing core on the same focal plane. To that end, a low-complexity neighborhood processor architecture along with a general-purpose, 8-bit instruction set has been designed and implemented. This enables program execution to be carried out in parallel on a two-dimensional array of pixel-neighborhood processing cores, allowing the design to scale directly with resolution. A prototype vision chip housing an array of 8×10 neighborhoods with a 64×80 resolution has been designed and fabricated in a 0.13 μm process. The single-chip vision system can be programmed to perform a variety of image and video processing tasks. A number of image processing tasks are presented to demonstrate the functionality of pixel-neighborhood level parallelism.
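To illustrate the execution model described above, the following is a minimal software sketch (not from the paper) of neighborhood-level parallelism: a 64×80 frame is partitioned into an 8×10 grid of 8×8 blocks, and every block is processed by the same per-neighborhood program, mirroring how each core would run the same instruction stream on its own pixels. The `threshold_kernel` operation, the threshold value, and the sequential loop emulating the concurrent cores are all illustrative assumptions; the actual chip executes its 8-bit instruction set in each core.

```python
import numpy as np

# Assumed frame dimensions matching the prototype: 64x80 pixels,
# organized as an 8x10 grid of 8x8-pixel neighborhoods.
ROWS, COLS = 64, 80
NB = 8  # neighborhood size (8x8 digital pixel sensors per core)

def threshold_kernel(block, t=128):
    """Illustrative per-neighborhood program: binarize an 8x8 block."""
    return (block >= t).astype(np.uint8) * 255

def run_on_neighborhood_array(frame, kernel):
    """Emulate every neighborhood core executing the same program.
    Each core sees only its own 8x8 block of the focal plane; on the
    chip these blocks are processed concurrently, not in a loop."""
    out = np.empty_like(frame)
    for r in range(0, ROWS, NB):
        for c in range(0, COLS, NB):
            out[r:r + NB, c:c + NB] = kernel(frame[r:r + NB, c:c + NB])
    return out

# Example usage with a synthetic frame.
frame = np.random.randint(0, 256, size=(ROWS, COLS), dtype=np.uint8)
binary = run_on_neighborhood_array(frame, threshold_kernel)
```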
