Abstract
Deep Spiking Neural Networks (SNNs) with event-driven dynamics have become increasingly popular in many challenging Machine Learning applications owing to their inexpensive and energy-efficient computation. The discontinuity of SNN dynamics, however, complicates the learning process and results in performance loss, as the dominant gradient-based training approaches are not easily adapted to the discontinuous SNN activation domain. One promising approach builds SNNs by converting trained Deep Neural Networks to SNNs, which has been very successful in classification applications. Recently, the scope of conversion studies has been extended to Deep Q-Networks (DQNs), and highly competitive performance has been achieved on many challenging Atari games. The present work provides a comprehensive description of the DQN-to-SNN conversion algorithm and evaluates the causes of potential performance loss during the conversion process. We analyze three key factors that allow practical implementations, without loss of generality, for a large class of highly demanding Q-learning problems: robust conversion rate, threshold percentile, and simulation time. Our results are not only competitive with DQN in terms of performance but also highly efficient, which is extremely beneficial for implementations on neuromorphic platforms.
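As an illustrative sketch of the rate-based conversion idea summarized above, the snippet below shows how a firing threshold can be set from a high percentile of the original network's activations (the "threshold percentile" factor) and how a converted layer of integrate-and-fire neurons can then be simulated for a fixed number of time steps (the "simulation time" factor). The function names, the 99.9 percentile value, and the 500-step simulation length are assumptions chosen for illustration, not the paper's exact algorithm or parameters.

```python
import numpy as np

def percentile_threshold(activations, percentile=99.9):
    """Pick the firing threshold as a high percentile of the ReLU activations
    observed on a calibration set; this is more robust to outliers than using
    the maximum activation. (Hypothetical helper for illustration.)"""
    return np.percentile(activations, percentile)

def simulate_if_layer(input_spikes, weights, threshold, sim_time=500):
    """Simulate one layer of integrate-and-fire neurons for `sim_time` steps.
    `input_spikes` is a (sim_time, n_in) array of 0/1 spikes; the returned
    spike counts divided by `sim_time` approximate the ReLU activations of
    the corresponding ANN layer."""
    n_out = weights.shape[1]
    membrane = np.zeros(n_out)
    spike_count = np.zeros(n_out)
    for t in range(sim_time):
        membrane += input_spikes[t] @ weights   # integrate weighted input spikes
        fired = membrane >= threshold           # neurons crossing the threshold fire
        spike_count += fired
        membrane[fired] -= threshold            # reset by subtraction keeps residual charge
    return spike_count / sim_time               # estimated firing rate per neuron

# Toy usage with random calibration activations and Bernoulli input spikes.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(32, 8))
calibration_acts = np.maximum(rng.normal(size=10_000), 0.0)
theta = percentile_threshold(calibration_acts, percentile=99.9)
spikes = (rng.random((500, 32)) < 0.3).astype(float)
rates = simulate_if_layer(spikes, weights, theta, sim_time=500)
print(rates)
```

In this sketch, a longer simulation time yields a finer-grained rate code (and better approximation of the ANN outputs) at the cost of more computation, which is the trade-off the abstract refers to when weighing performance against efficiency on neuromorphic hardware.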