Abstract

In recent years, the continuous development of artificial intelligence has sharply increased the volume of data that must be processed. As data volumes grow, moving data between memory and the computing unit consumes a great deal of energy and increases read/write latency, a bottleneck known as the “memory wall” problem. To avoid the extra energy loss caused by the memory wall in the von Neumann architecture, and to better accelerate neural network algorithms, in-memory computing is considered a promising processor architecture for future big-data applications. In this thesis, we study in-memory computing neural network accelerator architectures based on three memory technologies: Static Random-Access Memory (SRAM), Resistive Random-Access Memory (ReRAM), and Ferroelectric Field-Effect Transistor (FeFET). Non-volatile memories such as ReRAM and FeFET offer clear advantages owing to their non-volatility and low power consumption. Using the VGG-8, VGG-16, and AlexNet networks, the designed in-memory computing architecture is validated on three different datasets. The accelerator’s performance is evaluated in terms of the speedup achieved in each layer, the number of Tile units used per layer, read/write latency, read/write energy consumption, system area, energy-efficiency ratio, and other metrics. The experimental results show that the in-memory computing neural network accelerator designed in this thesis reaches a relatively advanced level.
