Abstract

In recent years, deep learning techniques have been widely applied to large-scale image processing, object detection, and a variety of computer vision, cognitive, and information analysis applications. Deep learning algorithms such as CNNs and FCNNs rely on high-dimensional matrix multiplications, which demand significant computational power. Frequent data movement between memory and the processing core incurs considerable power consumption and latency, making it a major performance bottleneck in conventional computing systems. To address this challenge, we propose an in-memory computing array that performs computation directly within the memory, thereby reducing the overhead associated with data movement. The proposed Random-Access Memory with in-situ Processing (RAMP) array reconfigures emerging magnetic random-access memory (MRAM) to realize logic and arithmetic functions inside the memory. Furthermore, the array supports independent operations across multiple rows and columns, which accelerates the execution of matrix operations. To validate the functionality and evaluate the performance of the proposed array, we perform extensive SPICE simulations. At the 45 nm technology node, the proposed array takes 5.39 ns, 0.68 ns, 0.68 ns, and 0.7 ns, and consumes 2.2 pJ/bit, 0.21 pJ/bit, 0.23 pJ/bit, and 0.7 pJ/bit for memory write, memory read, logic, and arithmetic operations, respectively.
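The abstract reports per-operation latency and energy but not a workload-level cost model. The sketch below is a minimal back-of-envelope estimate, not the paper's methodology: it plugs the reported 45 nm figures into a hypothetical bit-serial dot product, with operation counts and word width chosen purely for illustration, and assumes fully serialized execution (no use of the array's multi-row/multi-column parallelism).

```python
# Back-of-envelope cost model using the per-operation figures reported
# in the abstract (45 nm node). Operation counts below are illustrative
# assumptions, not taken from the paper.

# Reported per-operation costs (latency in ns, energy in pJ/bit).
COSTS = {
    "write":      {"latency_ns": 5.39, "energy_pj_per_bit": 2.2},
    "read":       {"latency_ns": 0.68, "energy_pj_per_bit": 0.21},
    "logic":      {"latency_ns": 0.68, "energy_pj_per_bit": 0.23},
    "arithmetic": {"latency_ns": 0.70, "energy_pj_per_bit": 0.70},
}

def estimate(op_counts, bits_per_op=1):
    """Sum latency (ns) and energy (pJ) for a mix of RAMP operations.

    op_counts: mapping of operation name -> number of operations.
    bits_per_op: word width each operation acts on (an assumption).
    Assumes operations are fully serialized, i.e., none of the array's
    row/column parallelism is exploited.
    """
    latency = sum(COSTS[op]["latency_ns"] * n for op, n in op_counts.items())
    energy = sum(COSTS[op]["energy_pj_per_bit"] * n * bits_per_op
                 for op, n in op_counts.items())
    return latency, energy

# Hypothetical example: a 64-element dot product on 8-bit operands already
# resident in the array -- 64 in-array multiplies plus 63 accumulating
# additions, all counted here as arithmetic operations.
lat, en = estimate({"arithmetic": 64 + 63}, bits_per_op=8)
print(f"~{lat:.1f} ns, ~{en:.1f} pJ (serialized upper bound)")
```

Because the abstract states that the array supports independent operations across multiple rows and columns, the serialized latency computed here should be read as an upper bound; concurrent row/column operations would reduce it substantially.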
