Abstract
Power consumption and data processing speed of integrated circuits (ICs) are increasing concerns in many emerging Artificial Intelligence (AI) applications, such as autonomous vehicles and the Internet of Things (IoT). Existing state-of-the-art SRAM architectures for AI computing are highly accurate and can provide high throughput. However, these SRAMs consume high power and occupy a large area to accommodate complex AI models. The carbon nanotube field-effect transistor (CNFET) has been reported as a potential candidate for AI devices requiring ultra-low power and high throughput, owing to its satisfactory carrier mobility and symmetrical, good subthreshold electrical performance. Based on the electrical performance of CNFET and FinFET devices, we propose novel ultra-low-power and high-throughput 8T SRAMs to circumvent the power and throughput issues in AI computation for autonomous vehicles. We propose two types of novel 8T SRAMs, a P-Latch N-Access (PLNA) 8T SRAM structure and a single-ended (SE) 8T SRAM structure, and compare their performance with existing state-of-the-art 8T SRAM architectures in terms of power consumption and speed. In FinFET and CNFET SRAM circuits, higher tube and fin counts lead to higher operating speed; however, a large number of tubes and fins also leads to larger area and higher power consumption. We therefore optimize the area by reducing the number of tubes and fins without compromising memory circuit speed and power. Most importantly, the decoupled read and write operation of our new SRAM cells offers better low-power operation due to device stacking in the read part, achieves better readability and writability, and is free of read Static Noise Margin (SNM) degradation because of the isolated read path, isolated write path, and larger pull-up ratio.
In addition, the proposed 8T SRAMs show even better delay and power performance when combined with a collaborating voltage sense amplifier and an independent read component. Compared with the existing state-of-the-art 8T SRAM, the proposed PLNA 8T SRAM saves 96% of the writing power consumption and the proposed SE 8T SRAM saves around 99% in the FinFET model, with a saving of around 99% for the write operation in the CNFET model as well.
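The savings percentages quoted above follow the usual relative-reduction formula. A minimal sketch, using hypothetical power figures chosen only to illustrate how such a percentage is computed (they are not measurements from this work):

```python
def power_saving(baseline_w: float, proposed_w: float) -> float:
    """Relative power reduction (%) of a proposed design vs. a baseline."""
    return (baseline_w - proposed_w) / baseline_w * 100.0

# Hypothetical write-power values, for illustration only:
baseline = 100e-9  # 100 nW baseline 8T SRAM write power (assumed)
plna = 4e-9        # 4 nW PLNA 8T SRAM write power (assumed)

print(f"{power_saving(baseline, plna):.0f}% saving")  # prints "96% saving"
```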
Highlights
Low-power and high-throughput issues are becoming increasingly important in Very Large-Scale Integration (VLSI) system, microprocessor, and Static Random Access Memory (SRAM) designs for Artificial Intelligence (AI) applications, such as autonomous vehicles [1] and the Internet of Things (IoT) [2]
We propose a reliable low-power, high-throughput SRAM that is useful for AI computation, using the advantages of the Fin Field-Effect Transistor (FinFET) and the Carbon Nanotube Field-Effect Transistor (CNFET) to overcome the shortcomings of conventional SRAM
Based on the electrical performance of the CNFET device, which offers satisfactory carrier mobility and good subthreshold behavior, our proposed SRAM shows a 99% reduction in power consumption and a 97% improvement in delay compared to the state-of-the-art SRAM under the same conditions
Summary
Low-power and high-throughput issues are becoming increasingly important in Very Large-Scale Integration (VLSI) system, microprocessor, and Static Random Access Memory (SRAM) designs for Artificial Intelligence (AI) applications, such as autonomous vehicles [1] and the Internet of Things (IoT) [2]. As deep learning models for autonomous vehicles become more and more complex, the chip area devoted to deep learning grows, and this large area causes power consumption problems in integrated circuits (ICs). For these reasons, many chip designers are now looking for ways to achieve high power efficiency without compromising chip speed and stability. Intel is developing low-power chips optimized for self-driving cars, Tesla is developing its own low-power chips for Autopilot, and Qualcomm is building the necessary communication hardware with low power and efficiency in mind [14]. From this point of view, our proposed new SRAM aims to support future AI chip development by providing a reliable low-power, high-throughput design for the memory, which accounts for the largest share of power consumption in these AI chips.