Abstract

For convolutional neural networks, increasing the performance of hardware computing systems is crucial in the era of big data. Benefiting from neuromorphic devices, performing convolutional calculation in crossbar array circuits has become a promising approach to high-performance hardware computing systems. However, as computation scales, such hardware systems face the challenge of low resource utilization efficiency and low power efficiency. Here, a novel pixel-level strategy, which leverages the dynamic change of electron concentration to carry out the convolution calculation, is proposed for high-performance hardware computing systems. Compared with the crossbar array circuit-based strategy, which requires at least four devices, the proposed strategy raises the power efficiency to 413% and reduces the training epochs to 38%. This work presents a novel physics-based approach that enables highly efficient convolutional calculation, improves power efficiency, speeds up convergence, and paves the way for building complex convolutional neural networks with large-scale convolutional computation capabilities.