Abstract

The fastest supercomputer, Summit, has a speed comparable to that of the human brain, but its energy efficiency (≈10¹⁰ FLOPS W⁻¹, floating-point operations per second per watt) is far lower than that of the brain (≈10¹⁵ FLOPS W⁻¹). The brain processes and learns from "big data" concurrently via trillions of synapses operating in parallel analog mode. By contrast, computers execute algorithms on physically separated logic and memory transistors in serial digital mode, which fundamentally restricts their ability to handle "big data" efficiently. Existing electronic devices can perform inference with high speed and energy efficiency, but they still lack the synaptic functions needed to carry out concurrent convolutional inference and correlative learning efficiently, as the brain does. In this work, synaptic resistors (synstors) are reported that emulate the analog convolutional signal processing, correlative learning, and nonvolatile memory functions of synapses. By circumventing the fundamental limitations of computers, a synstor circuit performs speech inference and learning concurrently in parallel analog mode with an energy efficiency of ≈1.6 × 10¹⁷ FLOPS W⁻¹, about seven orders of magnitude higher than that of the Summit supercomputer. Scaled-up synstor circuits could circumvent the fundamental limitations of computers and enable real-time inference and learning from "big data" with high efficiency and speed in intelligent systems.
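As a quick sanity check on the efficiency comparison quoted above, a minimal Python sketch (the FLOPS W⁻¹ values are taken from the abstract; variable names are illustrative only) confirms that the reported synstor-circuit efficiency exceeds Summit's by roughly seven orders of magnitude:

    import math

    # Energy efficiencies quoted in the abstract (FLOPS per watt)
    summit_efficiency = 1e10      # Summit supercomputer, ~1e10 FLOPS/W
    brain_efficiency = 1e15       # human brain, ~1e15 FLOPS/W
    synstor_efficiency = 1.6e17   # reported synstor circuit, ~1.6e17 FLOPS/W

    # Ratio of synstor-circuit efficiency to Summit's efficiency
    ratio = synstor_efficiency / summit_efficiency
    print(f"synstor / Summit efficiency ratio: {ratio:.1e}")    # 1.6e+07
    print(f"orders of magnitude: {math.log10(ratio):.1f}")      # ~7.2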
