Abstract

Spiking neural networks (SNNs) are artificial learning models that closely mimic the time-based information encoding and processing mechanisms observed in the brain. Unlike deep learning models, which encode information as real numbers, SNNs use binary spike signals and their arrival times to encode information, which could potentially improve the algorithmic efficiency of computation. However, the overall system efficiency of learning and inference systems implementing SNNs will depend on the ability to reduce data movement between processor and memory units; hence, in-memory computing architectures employing nanoscale memristive devices that operate at low power would be essential. The requirements and specifications of these devices for realizing SNNs are quite different from those of regular deep learning models. In this chapter we introduce some of the fundamental aspects of spike-based information processing and show how nanoscale memristive devices could be used to efficiently implement these algorithms for cognitive applications.
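The spike-based encoding described above is commonly illustrated with the leaky integrate-and-fire (LIF) neuron model, in which a membrane potential integrates input current, leaks toward rest, and emits a binary spike when it crosses a threshold. The sketch below is a minimal illustration of this idea, not a model from the chapter itself; all parameter values (`v_thresh`, `tau`, `dt`) are assumptions chosen for demonstration.

```python
def lif_spike_times(input_current, v_thresh=1.0, tau=20.0, dt=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron (illustrative parameters)
    and return the times at which it emits spikes."""
    v = v_reset
    spike_times = []
    for step, i_t in enumerate(input_current):
        # Leaky integration: potential decays toward rest and integrates input
        v += dt * (-v / tau + i_t)
        if v >= v_thresh:
            # Threshold crossing: emit a binary spike, then reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant input current produces a regular spike train; information is
# carried by the spike times rather than by real-valued activations.
spikes = lif_spike_times([0.1] * 100)
```

Note that a stronger input drives the potential to threshold sooner, so the inter-spike interval encodes the input intensity; this temporal code is what distinguishes SNNs from real-valued deep learning models.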
