Abstract

Dynamic Vision Sensor (DVS) pixels produce an asynchronous, variable-rate address-event output that represents brightness changes at the pixel. Since these sensors produce frame-free output, they are ideal for dynamic vision applications with real-time latency and power constraints. Event-based filtering algorithms have been proposed to post-process the asynchronous event output to reduce sensor noise, extract low-level features, and track objects, among other tasks. These post-processing algorithms help to increase the performance and accuracy of further processing for tasks such as classification using spike-based learning (e.g., ConvNets), stereo vision, and visually servoed robots. This paper presents an FPGA-based library of these post-processing event-based algorithms with implementation details; specifically, background activity (noise) filtering, pixel masking, object motion detection, and object tracking. The latencies of these filters on the Field Programmable Gate Array (FPGA) platform are below 300 ns, with an average latency reduction of 188% (maximum of 570%) over the software versions running on a desktop PC CPU. This open-source event-based filter IP library for FPGA has been tested on two different platforms and scenarios, using different synthesis and implementation tools from the Lattice and Xilinx vendors.
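As a concrete illustration of the first filter named above, the sketch below gives a minimal software reference model of background activity filtering: an event is kept only if one of its spatial neighbors fired recently, so temporally isolated events are discarded as noise. This is a generic formulation, not the paper's VHDL IP; the sensor resolution, struct fields, and the 8-neighbor support window are assumptions for illustration.

/* Minimal software reference sketch of a background activity filter
 * (generic formulation, not the paper's VHDL IP). An event is kept
 * only if one of its 8 spatial neighbors fired within dt_us.
 * Timestamps are assumed to start well above dt_us. */
#include <stdbool.h>
#include <stdint.h>

#define W 128                    /* assumed sensor width (e.g., DVS128) */
#define H 128                    /* assumed sensor height */

typedef struct {
    uint16_t x, y;               /* pixel address */
    bool on;                     /* polarity: ON (brighter) or OFF (darker) */
    uint32_t t_us;               /* timestamp in microseconds */
} dvs_event_t;

static uint32_t last_ts[H][W];   /* last event time seen at each pixel */

/* Returns true if the event has recent spatial support (is "signal"). */
bool baf_keep(const dvs_event_t *e, uint32_t dt_us)
{
    bool keep = false;
    for (int dy = -1; dy <= 1 && !keep; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            if (dx == 0 && dy == 0) continue;           /* skip self */
            int nx = e->x + dx, ny = e->y + dy;
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (e->t_us - last_ts[ny][nx] <= dt_us) {   /* recent neighbor */
                keep = true;
                break;
            }
        }
    }
    last_ts[e->y][e->x] = e->t_us;                      /* record this event */
    return keep;
}

In a hardware version, the per-pixel timestamp map would presumably live in on-chip RAM so that each event costs a single read-modify-write, which is consistent with the sub-300 ns latencies reported above.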

Highlights

  • Pixels in these vision sensors model simple ON and OFF retinal ganglion cells

  • Dynamic Vision Sensors (DVSs) [1], [2] mimic part of the biological retina’s functionality in silicon chips using an asynchronous output representation called Address Event Representation (AER) [3], [4] (a minimal encoding sketch follows this list)

  • In this work, we have developed a library of IP blocks for event-based visual processing for Field Programmable Gate Array (FPGA) using VHDL
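The AER encoding mentioned in the second highlight can be sketched as a packing of pixel coordinates and polarity into a single bus word. The bit layout below (7-bit x and y for a 128x128 array, polarity in the least significant bit) is an assumption for illustration; real sensors define their own formats.

/* Hypothetical AER word layout for a 128x128 sensor: 7-bit y, 7-bit x,
 * and a polarity bit packed into one 16-bit bus word. Real sensors
 * define their own bit layouts; this one is only illustrative. */
#include <stdint.h>

typedef struct {
    uint8_t x, y;        /* 7-bit pixel coordinates */
    uint8_t polarity;    /* 1 = ON event, 0 = OFF event */
} aer_event_t;

static inline uint16_t aer_encode(aer_event_t e)
{
    return (uint16_t)(((e.y & 0x7F) << 8) |   /* bits 14..8: y address */
                      ((e.x & 0x7F) << 1) |   /* bits  7..1: x address */
                      (e.polarity & 0x1));    /* bit      0: polarity  */
}

static inline aer_event_t aer_decode(uint16_t addr)
{
    aer_event_t e = {
        .x = (addr >> 1) & 0x7F,
        .y = (addr >> 8) & 0x7F,
        .polarity = addr & 0x1,
    };
    return e;
}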


Summary

Introduction

Pixels in these vision sensors model simple ON and OFF retinal ganglion cells. A sensed log-intensity brightness change at any pixel is sent out, typically less than 1 ms after it is produced. This architecture is radically different from that of the frame-based cameras used in artificial vision. Conventional cameras measure the intensity over a short period of time (the exposure time) in all the pixels, and they send out the entire frame. This frame transfer is done even though, in many scenarios, only a few pixels have changed since the last captured frame.
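The pixel behavior described above can be summarized with a toy model (not taken from the paper): each pixel memorizes a log-intensity reference level and emits an ON or OFF event whenever the current log intensity drifts more than a contrast threshold away from it. The threshold value below is an assumption.

/* Toy model (not from the paper) of DVS event generation: a pixel stores
 * a log-intensity reference and emits an ON/OFF event when the current
 * log intensity moves more than a contrast threshold away from it.
 * Intensity must be positive. */
#include <math.h>

#define THETA 0.15                      /* assumed ~15% contrast threshold */

typedef struct {
    double log_ref;                     /* memorized log-intensity level */
} dvs_pixel_t;

/* Returns +1 for an ON event, -1 for an OFF event, 0 for no event. */
int pixel_update(dvs_pixel_t *p, double intensity)
{
    double d = log(intensity) - p->log_ref;
    if (d > THETA)  { p->log_ref += THETA; return +1; }   /* brighter: ON */
    if (d < -THETA) { p->log_ref -= THETA; return -1; }   /* darker: OFF */
    return 0;                                             /* below threshold */
}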

