Abstract

Neuromorphic hardware equipped with learning capabilities can adapt to new, real-time data. While Spiking Neural Network (SNN) models can now be trained with gradient descent to reach accuracy comparable to equivalent conventional neural networks, such learning often relies on external labels. However, real-world data is largely unlabeled, which can make supervised methods inapplicable. To solve this problem, we propose a Hybrid Guided Variational Autoencoder (VAE) that uses an SNN to encode event-based data sensed by a Dynamic Vision Sensor (DVS) into a latent space representation. These representations can be used as an embedding to measure data similarity and predict labels in real-world data. We show that the Hybrid Guided-VAE achieves 87% classification accuracy on the DVSGesture dataset and encodes the sparse, noisy inputs into an interpretable latent space representation, visualized through t-SNE plots. We also implement the encoder component of the model on neuromorphic hardware and discuss the potential for our algorithm to enable real-time learning from real-world event data.
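For intuition only, the sketch below shows the general pattern the abstract describes: a spiking encoder integrates DVS event frames over time, and its spike-rate readout drives the standard VAE latent heads and reparameterization step. This is not the paper's implementation; the LIF dynamics, layer sizes, and all names (SpikingVAEEncoder, sample_latent) are illustrative assumptions.

```python
# Minimal sketch, assuming a rate-readout LIF encoder feeding VAE latent heads.
# Every architectural choice here is an assumption, not the authors' model.

import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient in backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2


class SpikingVAEEncoder(nn.Module):
    """Leaky integrate-and-fire layer followed by linear heads for mu and logvar."""

    def __init__(self, n_in=128 * 128 * 2, n_hidden=256, n_latent=16, beta=0.9):
        super().__init__()
        self.fc = nn.Linear(n_in, n_hidden)
        self.beta = beta  # membrane decay per time step (assumed value)
        self.mu = nn.Linear(n_hidden, n_latent)
        self.logvar = nn.Linear(n_hidden, n_latent)

    def forward(self, events):
        # events: (batch, time, n_in) binary DVS frames, on/off polarity flattened
        batch, steps, _ = events.shape
        v = torch.zeros(batch, self.fc.out_features, device=events.device)
        rate = torch.zeros_like(v)
        for t in range(steps):
            v = self.beta * v + self.fc(events[:, t])
            s = SurrogateSpike.apply(v - 1.0)  # spike when membrane crosses threshold
            v = v - s                          # soft reset after a spike
            rate = rate + s
        h = rate / steps                       # spike-rate readout drives latent heads
        return self.mu(h), self.logvar(h)


def sample_latent(mu, logvar):
    """Standard VAE reparameterization: z = mu + sigma * eps."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps


if __name__ == "__main__":
    enc = SpikingVAEEncoder()
    dvs = (torch.rand(4, 20, 128 * 128 * 2) < 0.05).float()  # fake sparse event frames
    mu, logvar = enc(dvs)
    z = sample_latent(mu, logvar)  # latent embedding usable for similarity or label prediction
    print(z.shape)                 # torch.Size([4, 16])
```

The resulting latent vectors z are what the abstract refers to as an embedding: nearby codes indicate similar gesture recordings, which is what the t-SNE visualizations make visible.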
