Abstract

Embedding advanced cognitive capabilities in battery-constrained edge devices requires specialized hardware with new circuit architectures and, in the medium to long term, new device technologies. We evaluate the potential of recently investigated devices based on 2D materials for the realization of analog deep neural networks by comparing the performance of neural networks that share the same circuit architecture but use three different device technologies for transistors and analog memories. As a reference, we also include in the comparison an implementation in a standard 0.18 μm CMOS technology. Our architecture of choice uses current-mode analog vector-matrix multipliers based on programmable current mirrors consisting of transistors and floating-gate non-volatile memories. We consider experimentally demonstrated transistors and memories based on a monolayer molybdenum disulfide (MoS2) channel, as well as ideal devices based on multilayer/monolayer PtSe2 heterostructures. Following a consistent methodology for device-circuit co-design and optimization, we estimate layout area, energy efficiency, and throughput as a function of the equivalent number of bits (ENOB), which is strongly correlated with classification accuracy. System-level tradeoffs are apparent: for a small ENOB, experimental MoS2 floating-gate devices are already very promising; in our comparison, a larger ENOB (7 bits) is only achieved with CMOS, signaling the need to improve the linearity and electrostatics of devices based on 2D materials.
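As a behavioral illustration of this architecture, the sketch below models a current-mode vector-matrix multiplier in which each weight is the gain of a programmable current mirror and each output is the Kirchhoff sum of the mirrored currents on a shared wire. All names, shapes, and current ranges are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def current_mode_vmm(in_currents, mirror_gains):
    """Ideal current-mode VMM: each input current is replicated by a row of
    programmable current mirrors, and the copies sharing an output wire add
    up by Kirchhoff's current law (illustrative behavioral model)."""
    # mirror_gains[i, j] is the programmed gain of the mirror that couples
    # input column j to output row i; the matrix product is the wired sum.
    return mirror_gains @ in_currents

# Toy example: 3 input currents, 2 outputs, gains in [0, 2] (hypothetical range).
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1e-6, size=3)       # input currents, in amperes
W = rng.uniform(0.0, 2.0, size=(2, 3))   # programmed mirror gains
print(current_mode_vmm(x, W))            # output currents, one per row
```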

Highlights

  • The pervasive success of deep learning in artificial intelligence applications [1] is accelerating research efforts towards specialized hardware with optimized computer architecture, circuit design and even device technology

  • The main effect is a shift from the general-purpose von Neumann paradigm to specialized hardware that leverages the properties of deep neural network (DNN) algorithms [2]

  • In order to provide a reference value for the equivalent number of bits (ENOB), one should note that we have previously shown that a vector-matrix multiplier (VMM) with an ENOB of 6 bits can guarantee a 99.7% classification accuracy of the

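For context, the equivalent number of bits of an analog block is commonly derived from its measured signal-to-noise-and-distortion ratio (SINAD) using the standard data-converter relation below; whether the paper adopts exactly this convention is an assumption here.

```latex
\mathrm{ENOB} = \frac{\mathrm{SINAD_{dB}} - 1.76\,\mathrm{dB}}{6.02\,\mathrm{dB}}
```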

Summary

INTRODUCTION

The pervasive success of deep learning in artificial intelligence applications [1] is accelerating research efforts towards specialized hardware with optimized computer architecture, circuit design, and even device technology. It has been demonstrated that inference with reduced multi-bit precision can reach a classification accuracy comparable to floating-point arithmetic, owing to the resilience of learning algorithms to disturbances [9], [11]. This opens up the possibility of performing computation in the analog domain by exploiting device physics and circuit properties (e.g., Kirchhoff's laws) [9], [10], [12], [13]. We consider this very same circuit architecture for a comparison of different 2D device technologies, so that we can use as a reference a case for which we have a full range of experimental results.
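The resilience to reduced precision mentioned above can be checked numerically. The sketch below, an illustrative experiment rather than the paper's benchmark, uniformly quantizes the weights of a dot product to n bits and reports the deviation from the floating-point result; the deviation shrinks rapidly with bit width, which is why a moderate ENOB can suffice for classification.

```python
import numpy as np

def quantize(w, n_bits, w_max):
    """Uniform symmetric quantization of w to n_bits over [-w_max, w_max]."""
    step = 2.0 * w_max / (2 ** n_bits - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

# Compare a quantized-weight dot product against full precision.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)             # activations
w = rng.uniform(-1.0, 1.0, size=256)     # weights
exact = w @ x
for n in (4, 6, 8):
    approx = quantize(w, n, 1.0) @ x
    print(f"{n}-bit weights: relative error {abs(approx - exact) / abs(exact):.2e}")
```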

BASIC OPERATION OF AN IN-MEMORY ANALOG VECTOR-MATRIX MULTIPLIER
SYSTEM LEVEL TESTBENCH
FIGURES OF MERIT FOR NEURAL NETWORK BENCHMARKING
DEVICES BASED ON 2D MATERIALS FOR ANALOG DNNS
Standard CMOS benchmark
MoS2 FGFET
Planar MoS2 transistor
PtSe2 LH-FET
PROGRAMMABLE CURRENT MIRROR DESIGN
BENCHMARK AND DISCUSSION
Findings
CONCLUSION