Abstract

Conventional machine vision systems employ separate perception, memory, and processing architectures, which struggle to meet the growing demand for ultrahigh image processing rates and ultralow power consumption. In contrast, in-sensor visual computing performs signal processing at the pixel level, operating directly on the collected analog signals without sending data to external processors. The in-sensor computing paradigm may therefore hold the key to extremely efficient, low-power visual signal processing, as it integrates sensing, storage, and computation on the focal plane using either novel circuit designs or new materials. The focal-plane sensor-processor (FPSP), a typical in-sensor visual computing device, is a vision chip that has been developed over nearly two decades in domains such as image processing, computer vision, robotics, and neural networks. Unlike conventional computer vision systems, the FPSP gives vision systems in-sensor image processing capabilities, thereby decreasing system complexity, reducing power consumption, and enhancing information processing efficiency and security. Although many studies on in-sensor computing with the FPSP have been conducted since its invention, no thorough and systematic summary of this work exists. This review surveys image processing algorithms, neural networks, and applications of in-sensor computing in the fields of machine vision and robotics. The objective is to help future developers, researchers, and users of unconventional visual sensors understand in-sensor computing and its applications.
