Abstract

Event cameras perceive pixel-level brightness changes and output asynchronous event streams, offering notable advantages in high temporal resolution, high dynamic range, and low power consumption for challenging vision tasks. To apply existing learning models to event data, many researchers integrate sparse events into dense frame-based representations that can be processed directly by convolutional neural networks. Although these works achieve high performance on event-based classification, their models require large numbers of parameters to process dense event frames, which is at odds with the sparsity of event data. To exploit the sparse nature of events, we propose a voxel-wise graph learning model (VMV-GCN) for spatio-temporal feature learning on event streams. Specifically, we design a volumetric multi-view fusion module (VMVF) to extract spatial and temporal information from views of voxelized event data. We then take representative event voxels as vertices and connect them with a novel dual-graph construction strategy. By aggregating neighborhood information based on the relationships among vertices, the proposed dynamic neighborhood feature learning module (DNFL) captures discriminative spatio-temporal features on dynamically updated graphs. Experiments show that our method achieves state-of-the-art performance with low model complexity on event-based classification tasks such as object classification and action recognition.
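The pipeline summarized above (voxelizing the event stream, selecting occupied voxels as graph vertices, and aggregating neighborhood features) can be illustrated with a rough sketch. The code below is not the paper's implementation: the voxel resolution, the hand-crafted per-voxel features, the plain k-nearest-neighbor graph standing in for the dual-graph construction, and the single mean-aggregation step in place of the learned DNFL module are all simplifying assumptions made for illustration.

```python
import numpy as np

def voxelize_events(events, sensor_size=(128, 128), grid=16, time_bins=8):
    """Accumulate an (N, 4) event array [x, y, t, polarity] into a coarse
    (grid x grid x time_bins) voxel grid and return the occupied voxel
    coordinates plus simple per-voxel features (event count, mean polarity).
    Resolution and feature choices are illustrative assumptions."""
    x, y, t, p = events.T
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9)   # normalize time to [0, 1]
    xi = (x / sensor_size[0] * grid).astype(int).clip(0, grid - 1)
    yi = (y / sensor_size[1] * grid).astype(int).clip(0, grid - 1)
    ti = (t * time_bins).astype(int).clip(0, time_bins - 1)

    flat = (xi * grid + yi) * time_bins + ti            # flattened voxel index
    n_vox = grid * grid * time_bins
    counts = np.bincount(flat, minlength=n_vox)
    pol_sum = np.bincount(flat, weights=p, minlength=n_vox)

    occ = np.nonzero(counts)[0]                         # occupied voxels become vertices
    coords = np.stack([occ // (grid * time_bins),       # recover (xi, yi, ti)
                       (occ // time_bins) % grid,
                       occ % time_bins], axis=1).astype(float)
    feats = np.stack([counts[occ], pol_sum[occ] / counts[occ]], axis=1)
    return coords, feats

def knn_graph(coords, k=8):
    """Connect each vertex to its k nearest neighbors in (x, y, t) space."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                         # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]                 # (V, k) neighbor indices

def aggregate(feats, neighbors):
    """One aggregation step: concatenate each vertex's features with the
    mean of its neighbors' features."""
    return np.concatenate([feats, feats[neighbors].mean(axis=1)], axis=1)

# Toy usage with random events (x, y, timestamp, polarity).
rng = np.random.default_rng(0)
events = np.stack([rng.integers(0, 128, 2000),
                   rng.integers(0, 128, 2000),
                   np.sort(rng.random(2000)),
                   rng.choice([-1.0, 1.0], 2000)], axis=1)
coords, feats = voxelize_events(events)
neighbors = knn_graph(coords)
print(aggregate(feats, neighbors).shape)                # (num_occupied_voxels, 4)
```

In the model described in the abstract, learned modules (VMVF and DNFL) would replace the hand-crafted features and the fixed mean aggregation, and the graph itself is updated dynamically rather than built once.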
