Abstract
Event cameras are an emerging technology in computer vision, offering extremely low latency and bandwidth as well as high temporal resolution and dynamic range. Inherent data compression is achieved because pixel data are produced only by contrast changes at the edges of moving objects. However, current state-of-the-art visual algorithms rely on deep learning, with networks designed to process the colour and intensity information contained in dense arrays, and such networks are notoriously computationally heavy. While the combination of these visual technologies could lead to fast, efficient, and accurate detection and recognition algorithms, it is uncertain whether the compressed event-camera data actually contain the information required for these techniques to discriminate between objects and a cluttered background. This paper presents a pilot study in which off-the-shelf deep learning is applied to visual events for object detection on the iCub robotic platform, and analyses the impact of temporal integration of the event data. We also present a novel pipeline that bootstraps event-based dataset annotation from mature frame-based algorithms, in order to generate the required datasets more quickly.
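To make the temporal-integration step concrete, the sketch below accumulates events from a fixed time window into a dense frame of the kind an off-the-shelf detector expects. It is illustrative only: the (x, y, t, p) event layout, the sensor resolution, and the 10 ms window are assumptions for the example, not the parameters used in the paper.

```python
import numpy as np

def integrate_events(events, t_start, window_ms, height, width):
    """Accumulate events falling in [t_start, t_start + window_ms)
    into a single 2D frame.

    `events` is assumed to be a NumPy structured array with fields
    x, y, t (microseconds), and p (polarity in {0, 1}); this layout
    is an assumption for illustration, not the paper's format.
    """
    t_end = t_start + window_ms * 1000  # window length in microseconds
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    frame = np.zeros((height, width), dtype=np.float32)
    # Map polarity {0, 1} to {-1, +1} and sum per pixel, so the edges
    # of moving objects show up as strong positive/negative responses.
    np.add.at(frame,
              (events["y"][mask], events["x"][mask]),
              2.0 * events["p"][mask] - 1.0)
    return frame

# Usage with synthetic data: 1000 random events on a hypothetical
# 240x304 sensor, integrated over a 10 ms window.
rng = np.random.default_rng(0)
dtype = [("x", np.int32), ("y", np.int32), ("t", np.int64), ("p", np.int8)]
events = np.zeros(1000, dtype=dtype)
events["x"] = rng.integers(0, 304, 1000)
events["y"] = rng.integers(0, 240, 1000)
events["t"] = np.sort(rng.integers(0, 20_000, 1000))  # microseconds
events["p"] = rng.integers(0, 2, 1000)
frame = integrate_events(events, t_start=0, window_ms=10,
                         height=240, width=304)
```

Shorter windows preserve the camera's temporal resolution but yield sparser frames; longer windows produce denser input for the network at the cost of motion blur, which is the trade-off the temporal-integration analysis examines.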