Abstract

Artificial neural networks have achieved remarkable success in the field of artificial intelligence. However, they suffer from catastrophic forgetting when dealing with continual learning problems, i.e., the loss of previously learned knowledge upon learning new information. Although several continual learning algorithms have been proposed, implementing them efficiently on conventional digital systems remains a challenge because of the physical separation between memory and processing units. Herein, a software–hardware codesigned in-memory computing paradigm is proposed, in which a mixed-precision continual learning (MPCL) model is deployed on a hybrid analogue–digital hardware system equipped with a resistive random-access memory (RRAM) chip. On the software side, the MPCL model effectively alleviates catastrophic forgetting and circumvents the need for high-precision weights. On the hardware side, the hybrid analogue–digital system exploits the colocation of memory and processing units, greatly improving energy efficiency. By combining the MPCL model with an in situ fine-tuning method, high classification accuracies of 94.9% and 95.3% (software baselines of 97.0% and 97.7%) are achieved on the 5-split-MNIST and 5-split-FashionMNIST tasks, respectively. During the inference phase, the proposed system reduces the energy consumption of the multiply-and-accumulate operations by a factor of ≈200 compared with conventional digital systems. This work paves the way for future autonomous systems at the edge.
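
A minimal sketch of the mixed-precision idea summarized above, under the assumption (the abstract does not spell out the MPCL details) that the bulk of the multiply-and-accumulate (MAC) work uses low-precision weights, as they would be stored as analogue RRAM conductances, while a small high-precision digital residual is kept for correction and in situ fine-tuning. The function names, the 4-bit precision, and the residual-compensation scheme are illustrative assumptions, not the authors' implementation.

import numpy as np

def quantize(w, n_bits=4):
    # Uniform quantization to 2**n_bits levels, a stand-in for the discrete
    # conductance states available in an RRAM crossbar (assumed, for illustration).
    scale = np.abs(w).max() / (2 ** (n_bits - 1) - 1)
    return np.round(w / scale) * scale

def mixed_precision_mac(x, w_full, n_bits=4):
    # Low-precision "analogue" MAC plus a small high-precision "digital" correction.
    w_low = quantize(w_full, n_bits)   # what the RRAM array would hold
    residual = w_full - w_low          # kept in digital, high precision
    return x @ w_low + x @ residual    # hybrid analogue-digital result

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 784))        # e.g., a flattened MNIST image
w = rng.standard_normal((784, 10)) * 0.05
y = mixed_precision_mac(x, w)
print(y.shape)  # (1, 10)

In this sketch the residual term exactly recovers the full-precision MAC result, which is only meant to illustrate how a low-precision analogue array and a small digital component could split the computation; the actual division of work in the MPCL system may differ.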
