Abstract

As the third generation of neural networks, spiking neural networks (SNNs) have recently attracted much attention because of their high energy efficiency on neuromorphic hardware. However, like traditional artificial neural networks (ANNs), training deep SNNs requires large amounts of labeled data, which are expensive to obtain in real-world applications. To address this issue, transfer learning has been proposed and widely used in traditional ANNs, but it has seen limited use in SNNs. In this article, we propose an effective transfer learning framework for deep SNNs based on domain-invariant representations. Specifically, we analyze the suitability of centered kernel alignment (CKA) as a domain distance measure relative to maximum mean discrepancy (MMD) in deep SNNs. In addition, we study feature transferability across different layers by testing on the Office-31, Office-Caltech-10, and PACS datasets. The experimental results demonstrate the transferability of SNNs and show the effectiveness of the proposed transfer learning framework using CKA.
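The abstract does not spell out how CKA is computed; as a minimal sketch only, the snippet below shows the standard linear form of CKA between two batches of layer features, with 1 − CKA usable as a domain distance. The function name `linear_cka`, the linear-kernel variant, and the random example data are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment (CKA) between feature
    matrices X (n x d1) and Y (n x d2), one sample per row.
    Returns a similarity in [0, 1]; 1 - CKA can serve as a
    domain distance (an assumption about the paper's usage)."""
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC terms with a linear kernel, up to a constant factor.
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return cross / (norm_x * norm_y)

# Illustrative usage with random source/target-domain features.
rng = np.random.default_rng(0)
source_feats = rng.normal(size=(128, 256))
target_feats = rng.normal(size=(128, 256))
print("CKA similarity:", linear_cka(source_feats, target_feats))
```

Unlike MMD, CKA normalizes by the self-similarity of each representation, which makes it invariant to isotropic scaling of the features, one plausible reason the authors compare the two measures for SNN activations.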
