Binary neural networks (BNNs) are promising for resource-constrained Internet of Things (IoT) devices owing to their lightweight memory and computation requirements. Moreover, BNNs based on computing-in-memory (CIM) architectures have attracted much attention in both algorithm and hardware designs. Recently, a variety of CIM-based BNN hardware designs have been proposed, particularly based on emerging nonvolatile memories (NVMs), which have merits in terms of nonvolatility and intrinsic resistance-based computing capabilities. However, mainstream NVMs utilize the one transistor plus one memory device (1T1M) cell structure, limiting computing efficiency and throughput. In this article, we propose a high-throughput CIM architecture for BNN hardware based on a voltage-controlled spin–orbit torque (VC-SOT) memory device, whose specific cell structure enables parallel programming and computing operations. In VC-SOT devices, multiple magnetic tunnel junctions (MTJs) are stacked on a heavy metal and share the same SOT write current. Furthermore, computing can be achieved through normal memory-like write and read operations. Based on a physics-based VC-SOT MTJ model, we designed and evaluated the key BNN hardware blocks of the proposed CIM architecture at the 40-nm technology node. Our simulation results validated the parallel programming/computing functionality and demonstrated the performance in terms of power consumption (~4 fJ/bit) and speed (~2 ns/write, 0.36–1.5 ns/read).
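The core operation such CIM-based BNN hardware accelerates is the binary dot product: with weights and activations restricted to {-1, +1} and encoded as bits {0, 1}, a multiply-accumulate reduces to an XNOR followed by a popcount. The sketch below (illustrative only, not taken from the article; `bnn_dot` is a hypothetical helper name) shows this reduction in software:

```python
# Illustrative sketch of the XNOR/popcount binary dot product that
# CIM-based BNN hardware evaluates in-array. Weights/activations in
# {-1, +1} are encoded as bits {0, 1} and packed into integers.

def bnn_dot(w_bits: int, x_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed as bit masks."""
    mask = (1 << n) - 1
    matches = (~(w_bits ^ x_bits)) & mask  # XNOR: 1 where signs agree
    p = bin(matches).count("1")            # popcount of agreements
    return 2 * p - n                       # agreements minus disagreements

# Example: w = 0b1101, x = 0b1011 (4 elements) -> two sign agreements,
# two disagreements, so the dot product is 2*2 - 4 = 0.
print(bnn_dot(0b1101, 0b1011, 4))  # -> 0
```

In a 1T1M crossbar each XNOR result is produced by one cell per cycle of row activation, whereas the multi-MTJ VC-SOT cell described in the abstract is aimed at parallelizing these per-bit operations across stacked junctions.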