Abstract

Event-driven (ED) cameras are an emerging technology that samples the visual signal based on changes in the signal magnitude, rather than at a fixed rate over time. This change in paradigm results in a camera with lower latency, lower power consumption, reduced bandwidth, and higher dynamic range. Such cameras offer many potential advantages for online, autonomous robots; however, the sensor data does not directly integrate with current image-based frameworks and software libraries. The iCub robot uses Yet Another Robot Platform (YARP) as middleware to provide modular processing and connectivity to sensors and actuators. This paper introduces a library that incorporates an event-based framework into the YARP architecture, allowing event cameras to be used with the iCub (and other YARP-based) robots. We describe the philosophy and methods for structuring events to facilitate processing, while maintaining low latency and real-time operation. We also describe several processing modules made available open source, and three example demonstrations that can be run on the neuromorphic iCub.
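As a concrete illustration of the kind of data such a sensor produces, the sketch below defines a minimal address-event structure (pixel coordinates, polarity, and timestamp) and a packet of such events. This is an illustrative, stand-alone example only; the field names and types are assumptions and are not the event classes defined by the event-driven library itself.

// Minimal illustration of an address event as produced by an event-driven
// camera: the pixel address, the sign of the brightness change, and a
// timestamp. Names and types here are illustrative assumptions, not the
// event-driven library's own classes.
#include <cstdint>
#include <vector>

struct AddressEvent {
    uint32_t stamp;   // sensor timestamp (e.g., microseconds; wraps around)
    uint16_t x;       // pixel column
    uint16_t y;       // pixel row
    bool polarity;    // true = brightness increase, false = decrease
};

// A packet of events accumulated over a short time window, ordered by stamp.
using EventPacket = std::vector<AddressEvent>;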

Highlights

  • Conventional vision sensors used in robotics rely on the acquisition of sequences of static images at fixed temporal intervals

  • This paper introduces the event-driven software libraries and infrastructure that are built upon Yet Another Robot Platform (YARP) and integrate with the iCub robot

  • On the iCub robot, a Linux driver reads the events from the camera FPGA interface and the zynqGrabber module exposes the data on a YARP port (a minimal read sketch follows this list)
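To give a sense of how the data can be consumed once zynqGrabber has exposed it on a YARP port, the following sketch subscribes to an event stream using the standard YARP C++ API. The source port name "/zynqGrabber/AE:o" and the four-integers-per-event encoding (timestamp, x, y, polarity) are assumptions for illustration; they are not the library's documented wire format.

// Sketch: subscribe to an event stream published on a YARP port and decode it.
// ASSUMPTIONS: the source port name and the "four integers per event"
// encoding are illustrative; the real zynqGrabber output format differs.
#include <yarp/os/all.h>
#include <cstdio>

int main() {
    yarp::os::Network yarp;                        // initialize YARP networking
    yarp::os::BufferedPort<yarp::os::Bottle> inPort;
    inPort.open("/eventReader:i");

    // Hypothetical source port; the actual grabber output port may differ.
    yarp::os::Network::connect("/zynqGrabber/AE:o", "/eventReader:i");

    while (true) {
        yarp::os::Bottle *packet = inPort.read();  // blocking read of one packet
        if (packet == nullptr) break;

        // Assumed encoding: [stamp, x, y, polarity, stamp, x, y, polarity, ...]
        for (size_t i = 0; i + 3 < (size_t)packet->size(); i += 4) {
            int stamp = packet->get(i).asInt32();
            int x     = packet->get(i + 1).asInt32();
            int y     = packet->get(i + 2).asInt32();
            int pol   = packet->get(i + 3).asInt32();
            std::printf("t=%d (%d,%d) pol=%d\n", stamp, x, y, pol);
        }
    }
    return 0;
}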


Summary

INTRODUCTION

Conventional vision sensors used in robotics rely on the acquisition of sequences of static images at fixed temporal intervals. Such a sensor provides the most information when the temporal dynamics of the scene match the sample rate. This paper introduces an event-driven library, built on YARP, whose modules include pre-processing utilities, visualization, low-level event-driven vision processing algorithms (e.g., corner detection), and robot behavior applications. These modules can be run and used by anyone for purely vision-based tasks, without the need for an iCub robot, by using: pre-recorded datasets, a “stand-alone” camera with a compatible FPGA, a “stand-alone” camera with a compatible USB connection, or by contributing a custom camera interface to the open-source library. We begin with a brief description of the current state-of-the-art in ED vision for robotics.
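To make the module structure concrete, the sketch below shows the skeleton of a generic event-processing module written with the standard yarp::os::RFModule class: one input port for incoming event packets, one output port for processed results, and a periodic update loop. The class name, port names, and 1 kHz update rate are hypothetical choices for illustration; this is not the library's actual module code, but it shows how a vision-only module of this kind can be driven by pre-recorded datasets or a stand-alone camera.

// Sketch of a generic event-processing module in plain YARP C++.
// Reads event packets on an input port, applies a placeholder processing
// step, and republishes the result. Names are hypothetical.
#include <yarp/os/all.h>
#include <string>

class EventProcessingModule : public yarp::os::RFModule {
    yarp::os::BufferedPort<yarp::os::Bottle> input;
    yarp::os::BufferedPort<yarp::os::Bottle> output;

public:
    bool configure(yarp::os::ResourceFinder &rf) override {
        // Port prefix can be set from the command line / configuration files.
        std::string name = rf.check("name", yarp::os::Value("/eventProc")).asString();
        return input.open(name + "/AE:i") && output.open(name + "/AE:o");
    }

    double getPeriod() override { return 0.001; }   // update at roughly 1 kHz

    bool updateModule() override {
        yarp::os::Bottle *packet = input.read(false);  // non-blocking read
        if (packet) {
            yarp::os::Bottle &out = output.prepare();
            out = *packet;          // placeholder: copy input (filtering would go here)
            output.write();
        }
        return true;                // keep the module running
    }

    bool close() override {
        input.close();
        output.close();
        return true;
    }
};

int main(int argc, char *argv[]) {
    yarp::os::Network yarp;
    yarp::os::ResourceFinder rf;
    rf.configure(argc, argv);
    EventProcessingModule module;
    return module.runModule(rf);    // blocks until the module is stopped
}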

EVENT-DRIVEN VISION FOR ROBOTS
THE EVENT-DRIVEN LIBRARY
Representing an Event
Event-Packets in YARP
Structuring the Event-Stream
Low-Level Processing
CONCLUSION

