Abstract

The development of advanced optoelectronic vision sensors capable of high-level image recognition and data preprocessing is poised to accelerate progress in machine vision and mobile electronics. Compared with traditional sensory computing schemes, in which analog signals are converted to digital form and processed by digital logic (i.e., von Neumann computing), neuromorphic vision computing can markedly improve energy efficiency and data-processing speed by minimizing the transmission of redundant raw data between front-end photosensitive sensors and back-end processors. Neuromorphic vision sensors are typically designed for tasks such as denoising, edge enhancement, spectral filtering, and visual information recognition. These approaches can be divided into near-sensor and in-sensor computing, depending on whether preprocessing is performed in situ. In near-sensor computing, the image sensor that captures visual information and the in-memory computing processor that preprocesses the captured images are separate devices; the in-memory computing processor performs storage and computation simultaneously by exploiting analog memory functions. In-sensor computing, by contrast, can be realized with single-element image sensors, so that the reception of visual information and the in-memory computing process take place within the same device. This represents the ideal scenario for the visual computing systems of future artificial intelligence machines and mobile electronic devices.
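To make the in-sensor computing idea concrete, the following is a minimal numerical sketch (not from the paper) of how an array of photodetectors with tunable responsivities can act as an analog in-memory multiply-accumulate unit: the responsivities play the role of stored weights, and summing the resulting photocurrents yields a preprocessed output directly at the sensor plane. The array sizes and values are hypothetical and chosen only for illustration.

```python
import numpy as np

# Illustrative sketch, assuming in-sensor computing is modeled as an analog
# multiply-accumulate: each photodetector's responsivity R[i, j] acts as a
# stored (memory) weight, its photocurrent is R[i, j] * light[i], and summing
# the currents along each output line yields one weighted sum of the input
# image, computed without shipping raw pixel data off the sensor.

rng = np.random.default_rng(0)

n_pixels = 9    # a 3x3 patch of the scene (hypothetical size)
n_outputs = 2   # e.g. two preprocessed feature channels (hypothetical)

# Optical input intensities (arbitrary units) and device responsivities
# (the "weights" stored in the sensor elements themselves).
light = rng.uniform(0.0, 1.0, size=n_pixels)
responsivity = rng.normal(0.0, 0.5, size=(n_pixels, n_outputs))

# Current summation along each output line performs the matrix-vector product.
output_currents = light @ responsivity

print("summed output currents:", output_currents)

# A conventional von Neumann pipeline would digitize and transmit all
# n_pixels raw values to a separate processor; here only n_outputs
# preprocessed values need to leave the sensor.
```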
