Abstract

Spiking Neural Networks (SNNs) promise extremely low-power and low-latency inference on neuromorphic hardware. Recent studies demonstrate that SNNs achieve competitive performance compared with Artificial Neural Networks (ANNs) in conventional classification tasks. In this work, we present an energy-efficient implementation of a Reinforcement Learning (RL) algorithm using SNNs to solve an obstacle avoidance task performed by an Unmanned Aerial Vehicle (UAV), using event-based input from a Dynamic Vision Sensor (DVS). We train the SNN directly, improving upon state-of-the-art implementations based on hybrid (not directly trained) SNNs. For this purpose, we devise an adaptation of the Spatio-Temporal Backpropagation (STBP) algorithm for RL. We then compare the SNN with a state-of-the-art Convolutional Neural Network (CNN) designed to solve the same task. To this end, we train both networks using a photorealistic training pipeline based on AirSim. To obtain a realistic latency and throughput assessment for embedded deployment, we design and train three different embedded SNN versions to be executed on state-of-the-art neuromorphic hardware. We compare the SNN and the CNN in terms of obstacle avoidance performance, showing that the SNN achieves better results than the CNN while consuming 6× less energy. We also characterize the different SNN hardware implementations in terms of energy and spiking activity.
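As a minimal sketch of the kind of surrogate-gradient update that direct STBP-style training relies on (the names SpikeFn and lif_step, the rectangular surrogate window, and the threshold/decay values are illustrative assumptions, not the paper's actual implementation):

```python
import torch

THRESHOLD = 1.0   # assumed firing threshold (illustrative value)
WIDTH = 0.5       # half-width of the rectangular surrogate window (assumed)

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient, in the spirit of STBP."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= THRESHOLD).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Pass gradient only where the membrane potential is near the threshold.
        return grad_out * ((v - THRESHOLD).abs() < WIDTH).float()

def lif_step(x, v, decay=0.9):
    """One leaky integrate-and-fire update: leak, integrate input, spike, hard reset."""
    v = decay * v + x
    s = SpikeFn.apply(v)
    return s, v * (1.0 - s)

# Unrolling lif_step over the sequence of DVS event frames and backpropagating
# through both time steps and layers is the core idea behind spatio-temporal
# backpropagation; the RL adaptation described in the paper builds on this.
```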
