Abstract

Spiking Neural Networks (SNNs) have gained increasing attention for their low power consumption, but training SNNs remains challenging. The Liquid State Machine (LSM), a major type of reservoir computing, is widely recognized for its low training cost among SNNs. Exploring LSM topologies to enhance performance typically requires a hyper-parameter search, which is both resource-expensive and time-consuming. We instead explore the influence of input scale reduction on the LSM. There are two main reasons to study input reduction for LSMs: the input dimension of large images requires efficient processing, and the redundancy present in images offers opportunities for compression. To mitigate the difficulty of handling the huge input space of an LSM, and to determine whether input reduction can enhance LSM performance, we explore several input patterns, namely fullscale, scanline, chessboard, and patch. We evaluate the proposed input patterns on several datasets, including two frame-based image datasets and one event-based image dataset. The experimental results show that, compared with the fullscale input pattern, the reduced input from the chessboard pattern improves accuracy by up to 5% and reduces execution time by up to 50%, while requiring up to 75% less input storage.
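To make the pattern idea concrete, the following is a minimal NumPy sketch of how such input reductions could be implemented. The function name `reduce_input` and the specific pattern definitions (row subsampling for scanline, an alternating-pixel mask for chessboard, 2x2 subsampling for patch) are illustrative assumptions, not the paper's published code; the paper's exact variants may differ, e.g. a coarser chessboard mask would account for the 75% storage reduction quoted above.

```python
import numpy as np

def reduce_input(image, pattern="chessboard"):
    """Flatten a 2-D frame into a reduced input vector for an LSM.

    Illustrative sketch only: these pattern definitions are plausible
    interpretations, not the paper's exact specification.
    """
    h, w = image.shape
    if pattern == "fullscale":
        return image.ravel()                   # all h*w pixels
    if pattern == "scanline":
        return image[::2, :].ravel()           # every other row, ~50% fewer pixels
    if pattern == "chessboard":
        rows, cols = np.indices((h, w))
        return image[(rows + cols) % 2 == 0]   # checkerboard mask, ~50% fewer pixels
    if pattern == "patch":
        return image[::2, ::2].ravel()         # one pixel per 2x2 block, 75% fewer
    raise ValueError(f"unknown pattern: {pattern}")

# Example: a 28x28 MNIST-style frame
img = np.random.rand(28, 28)
for p in ("fullscale", "scanline", "chessboard", "patch"):
    print(p, reduce_input(img, p).size)        # 784, 392, 392, 196
```

The reduced vector would then drive the LSM's input spike encoding in place of the full pixel array, shrinking both the input storage and the number of input-to-reservoir connections.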
