Abstract

Recently, new sensors with active pixels were brought to market. These sensors report local variations of light intensity as asynchronous events with low latency. Since the output is a stream of addressable events rather than a complete image of light intensities, new algorithms are required for well-known Computer Vision problems such as segmentation, visual odometry (VO), SLAM, and object and scene recognition. Several methodologies have been proposed for object recognition using conventional methods, convolutional neural networks, and third-generation, spike-based neural networks. However, convolutional neural networks and spiking neural networks require specific processing hardware that is hard to miniaturize. In addition, several traditional Computer Vision operators and feature descriptors have been neglected in the context of event sensors and could contribute to lighter object recognition methodologies. This paper proposes an algorithm for local binary pattern (LBP) extraction in the sparse structures typically found in this context. It also proposes two methodologies that apply local binary patterns to captures from event-based sensors for object recognition. The first methodology exploits the known motion performed by the sensor, while the second is motion agnostic. It is demonstrated experimentally that the LBP operator is a fast and light alternative that enables variable reduction using PCA in some cases. The experiments also show that the final feature vector for classification can be reduced by up to 99.73% compared to conventional methods considered state-of-the-art, while maintaining comparable accuracy.
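
To make the idea concrete, the sketch below shows a classic 3x3 LBP operator restricted to "active" pixels, i.e., those that received events. This is only a rough illustration under assumed inputs (an accumulated event frame and a boolean activity mask); the paper's actual sparse-structure algorithm is not detailed in the abstract, and the function and parameter names here are hypothetical.

```python
import numpy as np

def sparse_lbp(frame, active_mask):
    """Compute 8-bit LBP codes only at pixels marked active in active_mask.

    frame       -- 2D array of accumulated event counts or intensities (assumed input)
    active_mask -- 2D boolean array, True where at least one event occurred
    Returns a dict mapping (row, col) -> LBP code.
    """
    h, w = frame.shape
    # Neighbour offsets in clockwise order starting at the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = {}
    rows, cols = np.nonzero(active_mask)
    for r, c in zip(rows, cols):
        if r == 0 or c == 0 or r == h - 1 or c == w - 1:
            continue  # skip border pixels for simplicity
        center = frame[r, c]
        code = 0
        for bit, (dr, dc) in enumerate(offsets):
            # Set the bit when the neighbour is at least as bright/active as the center.
            if frame[r + dr, c + dc] >= center:
                code |= 1 << bit
        codes[(r, c)] = code
    return codes
```

A histogram of the resulting codes could then serve as the feature vector for classification, which is where a reduction technique such as PCA would apply.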
