Abstract

Event cameras are bio-inspired sensors that respond to pixel-level brightness changes in the form of asynchronous and sparse events. Each event is a 4-D tuple (timestamp, x, y, polarity). Recently, learning-based object detection algorithms for event cameras have made considerable strides. These methods transform event sequences into a tensor-like representation that can be processed by deep learning models. However, such conversion methods do not make full use of the polarity information in event sequences. We observe that tensor-like representations of different polarities exhibit different features. We therefore propose Polar Loss, a loss function that enhances the difference between tensor-like representations of different polarities. To evaluate the effectiveness of the loss function, we design a network architecture for object detection. Our results show that the model trained with Polar Loss achieves good flexibility and expressiveness on event-based object detection.
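To make the event-to-tensor conversion concrete, the sketch below accumulates a sequence of (timestamp, x, y, polarity) events into a two-channel event-count tensor, with one channel per polarity. This is only an illustrative representation under our own assumptions (the function name and the count-based encoding are ours), not the specific tensor-like representation or the Polar Loss used in the paper.

```python
# Minimal sketch, assuming a count-based per-polarity representation:
# accumulate an event sequence into a tensor of shape (2, H, W), so that
# positive and negative polarities occupy separate channels.
import numpy as np

def events_to_polarity_tensor(events, height, width):
    """events: array of shape (N, 4) with rows (timestamp, x, y, polarity),
    where polarity is +1 or -1. Returns a float32 tensor of shape (2, H, W)."""
    tensor = np.zeros((2, height, width), dtype=np.float32)
    for t, x, y, p in events:
        channel = 0 if p > 0 else 1           # channel 0: positive, 1: negative
        tensor[channel, int(y), int(x)] += 1  # count events per pixel
    return tensor

# Example: three synthetic events on a 4x4 sensor
events = np.array([[0.001, 1, 2, +1],
                   [0.002, 1, 2, -1],
                   [0.003, 3, 0, +1]])
print(events_to_polarity_tensor(events, height=4, width=4).shape)  # (2, 4, 4)
```

Keeping the polarities in separate channels is what makes it possible for a loss such as Polar Loss to compare and contrast the two representations during training.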
