Abstract
As demand grows for energy-efficient hardware capable of tackling complex tasks, information-processing paradigms are shifting from von Neumann to non-von Neumann computing architectures. Emerging electronic devices compete on speed, energy, and performance to realize neural hardware systems in which both training and inference must reach key milestones. In this Perspective, we discuss the essential criteria for training and inference in various nonvolatile neuromorphic systems, including filamentary resistive switching, interfacial resistive switching, electrochemical random-access memory, and ferroelectric memory. We present a holistic analysis of the technical requirements for designing ideal neuromorphic hardware, in which linearity is the critical aspect during training, whereas retention is the essential criterion for inference. Finally, we evaluate the prospects for future neuromorphic hardware systems that balance the competing demands of training and inference.