Abstract

Asynchronous event-based sensors, or “silicon retinae,” are a new class of vision sensors inspired by biological vision systems. The output of these sensors often contains a significant number of noise events alongside the signal. Filtering out these noise events is a common preprocessing step before the data are used for tasks such as tracking and classification. This paper presents a novel spiking neural network-based approach to filtering noise events from data captured by an Asynchronous Time-based Image Sensor, implemented on a neuromorphic processor, the IBM TrueNorth Neurosynaptic System. The main contribution of this work is the demonstration that our proposed filtering algorithm outperforms the traditional nearest-neighbor noise filter, achieving a signal-to-noise ratio approximately 10 dB higher and retaining approximately three times as many signal-related events. In addition, under some parameter settings relevant to our envisioned application of object tracking and classification, it can also generate some of the missing events in the spatial neighborhood of the signal for all classes of moving objects in the data, a capability that the nearest-neighbor filter cannot provide.

Highlights

  • Inspired by the efficient operation of biological vision, research on neuromorphic event-based image sensors, or “silicon retinae,” began several decades ago (Mahowald and Mead, 1991)

  • In keeping with the above trend, we propose in this work a set of noise filtering primitives that may be used as a preprocessing block for event-based image processing applications on TrueNorth

  • We present a novel neural network-based noise filtering (NeuNN) approach and compare it with the commonly used nearest-neighbor (NNb) noise filter for event-based image sensors
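To make the baseline concrete, the nearest-neighbor (NNb) filter referenced in the highlights is conventionally implemented by keeping an event only if one of its spatial neighbors fired recently. Below is a minimal sketch of that conventional filter, not the paper's own implementation: the function name, the (x, y, t, p) event tuple layout, and the default 5,000-microsecond window are illustrative assumptions.

```python
import numpy as np

def nnb_filter(events, width, height, time_window=5000):
    """Nearest-neighbor background-activity filter (illustrative sketch).

    Keeps an event only if any of its 8 spatial neighbors produced an
    event within `time_window` time units before it. `events` is an
    iterable of (x, y, t, p) tuples sorted by timestamp t.
    """
    # Timestamp map padded by 1 pixel on each side so border pixels
    # can be sliced without bounds checks; -1e18 means "never fired".
    last_ts = np.full((height + 2, width + 2), -1e18)
    kept = []
    for x, y, t, p in events:
        # 3x3 neighborhood around (x, y) in the padded map.
        patch = last_ts[y:y + 3, x:x + 3].copy()
        patch[1, 1] = -1e18  # ignore the pixel's own history
        if t - patch.max() <= time_window:
            kept.append((x, y, t, p))
        # Record this event's timestamp at the (padded) center pixel.
        last_ts[y + 1, x + 1] = t
    return kept
```

With this scheme, an isolated noise event with no recent neighborhood activity is dropped, while events generated by a moving object, which cluster spatially and temporally, survive.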


Introduction

Inspired by the efficient operation of biological vision, research on neuromorphic event-based image sensors, or “silicon retinae,” began several decades ago (Mahowald and Mead, 1991). Unlike conventional image sensors, which sample the scene at a fixed temporal rate (typically between 30 and 60 Hz), these sensors employ level-crossing sampling pixels that asynchronously and independently signal an event whenever sufficient temporal contrast is detected (Posch et al., 2014). This results in a higher dynamic range, a lower data rate, and lower power consumption compared to frame-based imagers.
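The level-crossing sampling principle above can be sketched in a few lines: a pixel holds a reference log-intensity and emits an ON or OFF event each time the current log-intensity crosses the reference by a fixed contrast threshold. This is an idealized, discrete-time illustration; the function name, the 0.3 threshold, and the per-step intensity trace are assumptions, not details from the paper.

```python
import math

def level_crossing_events(samples, threshold=0.3):
    """Idealized temporal-contrast (level-crossing) event generation.

    `samples` is a per-timestep intensity trace for one pixel. An event
    (t, polarity) is emitted each time log-intensity moves `threshold`
    past the stored reference; the reference then steps toward the
    current value, so a large change yields several events.
    """
    events = []
    ref = math.log(samples[0])  # reference log-intensity at startup
    for t, intensity in enumerate(samples[1:], start=1):
        diff = math.log(intensity) - ref
        while abs(diff) >= threshold:
            polarity = 1 if diff > 0 else -1  # ON = +1, OFF = -1
            events.append((t, polarity))
            ref += polarity * threshold       # step reference toward input
            diff = math.log(intensity) - ref
    return events
```

Because only changes are signaled, a static scene produces no output at all, which is the source of the data-rate and power advantages noted above.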
