Abstract
Memristors show great promise for neuromorphic computing owing to their high-density integration, fast computation and low energy consumption. However, the non-ideal update of synaptic weights in memristor devices, including nonlinearity, asymmetry and device variation, still poses challenges to in-situ learning with memristors, thereby limiting their broad application. Although existing offline learning schemes can avoid this problem by moving the weight optimization process to the cloud, they struggle to adapt to unseen tasks and uncertain environments. Here, we propose a bi-level meta-learning scheme, named Rapid One-step Adaption (ROA), that alleviates the non-ideal update problem and achieves fast adaptation and high accuracy. By introducing a special regularization constraint and a dynamic learning-rate strategy for in-situ learning, ROA effectively combines offline pre-training with online rapid one-step adaptation. We implemented it on memristor-based neural networks to solve few-shot learning tasks, demonstrating its superiority over purely offline and purely online schemes under noisy conditions. This method enables in-situ learning in non-ideal memristor networks, with potential applications in on-chip neuromorphic learning and edge computing.
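The two-phase recipe the abstract describes (offline pre-training followed by a single online gradient step on a new task) can be sketched with a first-order meta-learning loop on a toy regression family. The Reptile-style outer update, the decaying learning-rate schedule and all constants below are illustrative assumptions, not the paper's actual ROA algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task():
    """Toy task family: linear regression y = a * x with a task-specific slope a."""
    a = rng.uniform(0.5, 2.0)
    X = rng.normal(size=(20, 1))
    return X, a * X[:, 0], a

def grad(w, X, y):
    """Gradient of the mean-squared-error loss of a linear model."""
    err = X @ w - y
    return 2 * X.T @ err / len(y)

def one_step_adapt(w_meta, X, y, lr=0.1):
    """Online phase: one in-situ gradient step from the meta-trained weights."""
    return w_meta - lr * grad(w_meta, X, y)

# Offline phase: meta-train w_meta so that a single step adapts well.
# (First-order, Reptile-style surrogate for the paper's bi-level scheme.)
w_meta = np.zeros(1)
for step in range(500):
    X, y, _ = make_task()
    w_fast = one_step_adapt(w_meta, X, y)
    outer_lr = 0.5 / (1 + 0.01 * step)   # assumed dynamic learning-rate schedule
    w_meta += outer_lr * (w_fast - w_meta)

# A new unseen task: one update step moves the weights toward its true slope.
X, y, a = make_task()
w_new = one_step_adapt(w_meta, X, y)
```

Because the meta-weights settle near the center of the task distribution, a single gradient step from them already closes most of the gap to each new task, which is the property the one-step adaptation relies on.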
Highlights
Memristors are considered leading device candidates for neural network accelerators (Yang et al., 2013; Chen et al., 2015; Tsai et al., 2018; Zidan et al., 2018) due to their ability to physically store synaptic weights as conductance states, enabling in-memory computing
We developed a bi-level meta-learning scheme, Rapid One-step Adaption (ROA), for memristor neural networks
It is a hybrid approach combining online and offline learning that effectively alleviates the impact of the non-ideal properties of memristors through a single update step
Summary
Memristors are considered leading device candidates for neural network accelerators (Yang et al., 2013; Chen et al., 2015; Tsai et al., 2018; Zidan et al., 2018) due to their ability to physically store synaptic weights as conductance states, enabling in-memory computing. We propose a meta-learning scheme for memristor-based neural networks that overcomes non-ideal synaptic weight updates during training and provides improved performance. For an unseen task, rapid training is performed as a one-step adaptation of the in-situ hardware network using only a few samples. This scheme frees the memristor network from unnecessary programming operations, mitigating the performance degradation seen in pure online learning. Since only one update step is needed, a new task requires significantly less training time, only a few samples and little computation. These merits make our scheme well suited to settings with limited computing power and limited data, such as edge computing. We propose a hybrid scheme of offline and online learning for meta-learning on memristor-based neural networks; it combines the advantages of both so that the hardware achieves high accuracy and fast adaptation to unseen tasks. Our results show good performance of memristor networks on few-shot learning tasks, with a significant accuracy improvement over the baseline
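The non-ideal weight updates the summary refers to (nonlinear, asymmetric conductance changes with device variation) are commonly modeled with saturating exponential update curves. The sketch below uses that standard form with purely illustrative constants, not parameters fitted to any device in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

G_MIN, G_MAX = 0.0, 1.0   # normalized conductance range (assumed)

def nonideal_update(g, pulse, nl_pot=3.0, nl_dep=5.0, sigma=0.02):
    """One programming pulse on a memristive synapse.

    Models the three non-idealities named in the text:
    * nonlinearity - step size shrinks as conductance saturates,
    * asymmetry    - potentiation (nl_pot) and depression (nl_dep)
                     follow different curves,
    * variation    - multiplicative cycle-to-cycle noise (sigma).
    All constants are illustrative, not measured device parameters.
    """
    if pulse > 0:   # potentiating pulse
        dg = (G_MAX - g) * (1 - np.exp(-1.0 / nl_pot))
    else:           # depressing pulse
        dg = -(g - G_MIN) * (1 - np.exp(-1.0 / nl_dep))
    dg *= 1 + sigma * rng.normal()
    return float(np.clip(g + dg, G_MIN, G_MAX))

# Repeated potentiation saturates toward G_MAX with shrinking steps;
# a few depressing pulses then pull the conductance back down.
g = 0.0
for _ in range(30):
    g = nonideal_update(g, +1)
g_depressed = g
for _ in range(5):
    g_depressed = nonideal_update(g_depressed, -1)
```

Because an identical gradient signal produces different conductance changes depending on the current state and update direction, naive online training accumulates error here, which is the degradation the one-step hybrid scheme is designed to mitigate.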