Abstract
As the third generation of neural networks, spiking neural networks (SNNs) have recently gained much attention because of their high energy efficiency on neuromorphic hardware. However, training deep SNNs requires large amounts of labeled data that are expensive to obtain in real-world applications, just as with traditional artificial neural networks (ANNs). To address this issue, transfer learning has been proposed and widely used in traditional ANNs, but it has seen limited use in SNNs. In this article, we propose an effective transfer learning framework for deep SNNs based on domain-invariant representations. Specifically, we analyze the rationality of centered kernel alignment (CKA) as a domain distance measurement relative to maximum mean discrepancy (MMD) in deep SNNs. In addition, we study the feature transferability across different layers by testing on the Office-31, Office-Caltech-10, and PACS datasets. The experimental results demonstrate the transferability of SNNs and show the effectiveness of the proposed transfer learning framework using CKA in SNNs.
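To make the two domain distance measures named in the abstract concrete, the following is a minimal sketch (not the paper's implementation) of linear CKA and a Gaussian-kernel MMD computed between a source-domain and a target-domain feature batch. The function names and the choice of a linear kernel for CKA are illustrative assumptions; the sketch assumes equal batch sizes for the two domains.

```python
# Illustrative sketch only, not the authors' code: linear CKA (a similarity,
# higher = more aligned) and a biased Gaussian-kernel MMD^2 (a distance,
# lower = more aligned) between source and target feature matrices
# (rows = samples, columns = features). Assumes equal batch sizes.
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices with the same number of rows."""
    X = X - X.mean(axis=0, keepdims=True)   # center each feature column
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, 'fro') ** 2
    norm_x = np.linalg.norm(X.T @ X, 'fro')
    norm_y = np.linalg.norm(Y.T @ Y, 'fro')
    return cross / (norm_x * norm_y)

def gaussian_mmd(X, Y, sigma=1.0):
    """Biased MMD^2 estimate with a Gaussian kernel of bandwidth sigma."""
    def kernel(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

# Example: compare hypothetical source- and target-domain feature batches
rng = np.random.default_rng(0)
src = rng.normal(size=(64, 128))            # e.g. source-domain spike-rate features
tgt = rng.normal(loc=0.5, size=(64, 128))   # e.g. target-domain spike-rate features
print("CKA similarity:", linear_cka(src, tgt))
print("MMD^2 distance:", gaussian_mmd(src, tgt))
```

Note that CKA is a normalized similarity in [0, 1], whereas MMD is a distance; when used as a transfer-learning objective, one would maximize CKA or minimize MMD between source and target features.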