Abstract

Artificial intelligence (AI) is being deployed rapidly across most applications today. However, current hardware systems cannot meet the demands of AI within practical energy and latency budgets. Imaging hardware is a crucial example: more than 90% of the data generated by image sensors is redundant, yet it is processed indiscriminately in AI applications such as classification and recognition, wasting substantial energy and time. It is therefore necessary to develop novel in-sensor computing architectures that mimic the human retina, which handles tremendous amounts of visual data with high intelligence and energy efficiency. Most current practical vision chips (in-sensor computing chips) are based on CMOS-compatible fabrication technologies. However, the sensing signals are in analog format, and the associated memory and processing devices are bulky, making it challenging to build large-scale neural networks. Emerging approaches include (1) new material systems that are compact enough to handle both sensing and processing, and (2) advanced device structures that combine sensing and computing functionalities. The ultimate goal of in-sensor computing is efficient artificial-intelligence hardware with low power consumption, high speed, high resolution, high-accuracy recognition, large-scale integration, and programmability.
