Abstract

Recent advances in artificial intelligence (AI) have shown remarkable success across numerous domains, such as cloud computing, deep learning, and neural networks. Most of these applications rely on fast computation and large storage, which poses significant challenges for the hardware platform. Hardware performance has become the bottleneck, and there has therefore been considerable interest in recent years in exploring new computing architectures. Compute-in-memory (CIM) has drawn the attention of researchers and is considered one of the most promising candidates to address these challenges: it is an emerging technique for meeting the fast-growing demand for high-performance data processing. It offers fast processing, low power consumption, and high performance by blurring the boundary between processing cores and memory units. A key aspect of CIM is performing matrix-vector multiplication (MVM), or dot-product, operations by intertwining processing and memory elements. As the primary computational kernel in neural networks, the dot-product operation is the main target for performance improvement. In this paper, we present the design, implementation, and analysis of quantum-dot transistor (QDT) based CIM, from the multi-bit multiplier to the dot-product unit, and then the in-memory computing array.
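For context, the sketch below illustrates the kernel the abstract refers to: an MVM decomposes into row-wise dot products, which a CIM array evaluates in place, and multi-bit inputs are often handled by a bit-serial shift-and-add scheme. This is a generic illustration only, not the paper's QDT-based design; the 4-bit width, function names, and NumPy reference arithmetic are assumptions chosen for clarity.

    # Illustrative sketch (not the paper's QDT implementation): MVM as
    # row-wise dot products, plus a common bit-serial scheme for
    # multi-bit input operands in CIM-style accelerators.
    import numpy as np

    def mvm_as_dot_products(W, x):
        """Reference MVM: each output element is one dot product."""
        return np.array([np.dot(row, x) for row in W])

    def mvm_bit_serial(W, x, input_bits=4):
        """Apply the input one bit-plane at a time and shift-add the
        partial dot products (a widely used multi-bit CIM scheme)."""
        x = x.astype(np.int64)
        acc = np.zeros(W.shape[0], dtype=np.int64)
        for k in range(input_bits):
            x_bit = (x >> k) & 1        # k-th bit-plane of the input vector
            acc += (W @ x_bit) << k     # partial dot products, weighted by 2^k
        return acc

    rng = np.random.default_rng(0)
    W = rng.integers(0, 8, size=(4, 6))   # multi-bit weights stored in the array
    x = rng.integers(0, 16, size=6)       # 4-bit input vector
    assert np.array_equal(mvm_as_dot_products(W, x), W @ x)
    assert np.array_equal(mvm_bit_serial(W, x), W @ x)

Both functions reproduce the reference result W @ x; the bit-serial variant mirrors how an array can reuse single-bit (or low-precision) operations to build up multi-bit dot products.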


