Abstract

Dynamic vision sensors (DVSs) are emerging neuromorphic visual sensing devices with great advantages in terms of low power consumption, wide dynamic range, and high temporal resolution in diverse applications such as autonomous driving, robotics, tactile sensing, and drones. Their capturing method results in lower data rates than conventional video, yet such data can be compressed further. Recent research has shown great benefits of temporal data aggregation for event-based vision data utilization: time aggregation of DVS data not only reduces the data rate but also improves classification and object detection accuracy. In this work, we propose a compression strategy, time-aggregation-based lossless video encoding for neuromorphic vision sensor data (TALVEN), which combines temporal data aggregation, arrangement of the data in a specific format, and lossless video encoding techniques to achieve high compression ratios. A detailed experimental analysis on outdoor and indoor datasets shows that our proposed strategy achieves higher compression ratios than the best state-of-the-art strategies.
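
The sketch below illustrates the kind of temporal aggregation the abstract refers to: DVS events are accumulated into per-window pixel-count frames, which can then be fed to a standard lossless video encoder. The event tuple layout, window length, two-channel polarity frames, and the choice of encoder are illustrative assumptions, not the paper's exact TALVEN format.

```python
# Minimal sketch of event-to-frame temporal aggregation (illustrative only;
# the event layout, window length, and two-channel polarity frames are
# assumptions, not the exact TALVEN format described in the paper).
import numpy as np

def aggregate_events(events, height, width, window_us=10_000):
    """Accumulate DVS events into per-window event-count frames.

    events: ndarray of shape (N, 4) with columns [timestamp_us, x, y, polarity],
            polarity in {0, 1}.
    Returns an array of shape (num_windows, 2, height, width) holding
    per-pixel, per-polarity event counts for each time window.
    """
    t = events[:, 0]
    win_idx = ((t - t.min()) // window_us).astype(int)
    num_windows = int(win_idx.max()) + 1

    frames = np.zeros((num_windows, 2, height, width), dtype=np.uint16)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(int)
    # Count events per (window, polarity, pixel); np.add.at handles repeated indices.
    np.add.at(frames, (win_idx, p, y, x), 1)
    return frames

# The resulting frame sequence could then be arranged into a video-friendly
# layout and compressed with a lossless codec (e.g., FFV1 or lossless HEVC),
# which is the general pipeline shape the abstract describes.
```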
