Abstract

The visual scene in the physical world integrates multidimensional information (spatial, temporal, polarization, spectral and so on) and is typically unstructured. Conventional image sensors cannot process such multidimensional vision data, creating a need for vision sensors that can efficiently extract features from it. These vision sensors transform the unstructured visual scene into featured information without relying on sophisticated algorithms or complex hardware. The response characteristics of the sensors can be abstracted into operators with specific functionalities, enabling efficient processing of perceptual information. In this Review, we examine the hardware implementation of multidimensional vision sensors, exploring their working mechanisms and design principles. We exemplify multidimensional vision sensors built on emerging devices and on silicon-based system integration. We further provide benchmarking metrics for multidimensional vision sensors and conclude with the principles of device-system co-design and co-optimization.
