In-sensor computing architectures offer a clear advantage over separate intelligent sensor systems, particularly in massive data sampling, transfer, and processing. However, most in-sensor computing devices are proposed on the basis of the traditional neural network model, in which a synapse performs a linear multiplication of input and weight; this approach fails to fully exploit the nonlinearity of in-sensor computing devices. Therefore, in this article, a modified feedforward neural network model is first presented in which a nonlinear in-sensor computing synapse (NSCS) is located at the input layer, and the backpropagation (BP) algorithm is modified to train the network. The nonlinear characteristics of an NSCS composed of spin-transfer-torque magnetic tunnel junction (STT-MTJ) devices and a simple complementary metal-oxide-semiconductor (CMOS) circuit are then analyzed. Based on the nonlinear response of the STT-MTJ NSCS, a small-scale network with NSCS synapses is evaluated on the Modified National Institute of Standards and Technology (MNIST) dataset and compared with a traditional network of the same size. Simulation results show that better performance is achieved with the STT-MTJ NSCS, including a 2–15 times improvement in convergence speed and a 2.5%–5.1% increase in accuracy.
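The core idea — an input-layer synapse whose output is a nonlinear function of weight and input, trained by a BP rule that chains through that nonlinearity — can be sketched as follows. This is a minimal illustration, not the paper's method: the actual STT-MTJ response curve is replaced here by a hypothetical `tanh` stand-in, and the network is reduced to a single linear output neuron.

```python
import numpy as np

def synapse(w, x):
    # Nonlinear in-sensor synapse: output is a nonlinear function of
    # weight and input, unlike the linear product w * x of a
    # conventional synapse. tanh is a hypothetical stand-in for the
    # device's actual response.
    return np.tanh(w * x)

def synapse_grad_w(w, x):
    # d(synapse)/dw, the extra factor required by the modified BP rule.
    return x * (1.0 - np.tanh(w * x) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=4)   # sensor inputs
w = rng.normal(size=4)   # input-layer synaptic weights
v = rng.normal(size=4)   # weights of a linear output neuron

def loss(w):
    # Forward pass: nonlinear synapses feed one linear output neuron;
    # squared-error loss against a fixed target of 1.0.
    y = v @ synapse(w, x)
    return 0.5 * (y - 1.0) ** 2

# Modified BP: the chain rule passes through the synapse nonlinearity.
y = v @ synapse(w, x)
grad = (y - 1.0) * v * synapse_grad_w(w, x)

# Finite-difference check that the modified gradient is correct.
eps = 1e-6
num = np.array([(loss(w + eps * np.eye(4)[i]) - loss(w - eps * np.eye(4)[i])) / (2 * eps)
                for i in range(4)])
print(np.allclose(grad, num, atol=1e-6))  # True
```

With the gradient verified, the weight update is the usual descent step `w -= lr * grad`; only the local derivative term differs from standard BP.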